[GitHub] [incubator-mxnet] sxjscience commented on issue #17683: Fix reverse shape inference in LayerNorm

2020-02-24 Thread GitBox
sxjscience commented on issue #17683: Fix reverse shape inference in LayerNorm
URL: https://github.com/apache/incubator-mxnet/pull/17683#issuecomment-590727858
 
 
   @ZheyuYe 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience opened a new pull request #17683: Fix reverse shape inference in LayerNorm

2020-02-24 Thread GitBox
sxjscience opened a new pull request #17683: Fix reverse shape inference in 
LayerNorm
URL: https://github.com/apache/incubator-mxnet/pull/17683
 
 
   Fix https://github.com/apache/incubator-mxnet/issues/17654. Now, the user 
will see an error message if `in_channels` does not match the corresponding 
dimension of the input.
   
   After the PR, the following will raise an error
   ```python
   import mxnet as mx
   from mxnet.gluon import nn
   net = nn.LayerNorm(in_channels=10)
   net.initialize()
   net.hybridize()
   out = net(mx.nd.ones((2, 11)))  # Trigger the error
   ```
   
   Error:
   ```
   MXNetError: MXNetError: Error in operator layernorm0_layernorm0: Shape 
inconsistent, Provided = [10], inferred shape=[11]
   
   ```
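
   The check introduced here can be sketched in plain Python. This is a 
hypothetical helper mirroring the behavior described above, not MXNet's 
actual C++ implementation: the user-supplied `in_channels` must equal the 
size of the normalized axis.

   ```python
   def check_layernorm_shape(in_channels, data_shape, axis=-1):
       """Mimic LayerNorm's reverse shape inference: the provided
       in_channels must match the size of the normalized axis."""
       inferred = data_shape[axis]
       if in_channels != inferred:
           raise ValueError(
               "Shape inconsistent, Provided = [%d], inferred shape=[%d]"
               % (in_channels, inferred))

   check_layernorm_shape(10, (2, 10))      # matching shapes pass silently
   try:
       check_layernorm_shape(10, (2, 11))  # mismatch raises, as in the PR
   except ValueError as err:
       print(err)
   ```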
   




[GitHub] [incubator-mxnet] apeforest merged pull request #17599: [Large Tensor] Fixed Embedding op

2020-02-24 Thread GitBox
apeforest merged pull request #17599: [Large Tensor] Fixed Embedding op
URL: https://github.com/apache/incubator-mxnet/pull/17599
 
 
   




[incubator-mxnet] branch master updated: [Large Tensor] Fixed Embedding op (#17599)

2020-02-24 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new d51753b  [Large Tensor] Fixed Embedding op (#17599)
d51753b is described below

commit d51753bd87fb24509c01b7e9f8f638dd1026c515
Author: Connor Goggins 
AuthorDate: Mon Feb 24 23:33:10 2020 -0800

[Large Tensor] Fixed Embedding op (#17599)

* Switched from int to index_t for input_dim

* Implemented fix for output_dim

* Added nightly test for Embedding

* Set const value for output dim

* More standardization via const param
---
 src/operator/tensor/indexing_op.h |  8 ++++----
 tests/nightly/test_large_array.py | 13 +++++++++++++
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/src/operator/tensor/indexing_op.h b/src/operator/tensor/indexing_op.h
index 7b6c16a..5449fbe 100644
--- a/src/operator/tensor/indexing_op.h
+++ b/src/operator/tensor/indexing_op.h
@@ -66,8 +66,8 @@ enum QuantizedEmbeddingOpResource {kTempSpace};
 
 
 struct SparseEmbeddingParam: public dmlc::Parameter<SparseEmbeddingParam> {
-  int input_dim;
-  int output_dim;
+  index_t input_dim;
+  index_t output_dim;
   int dtype;
   bool deterministic;
   DMLC_DECLARE_PARAMETER(SparseEmbeddingParam) {
@@ -89,8 +89,8 @@ struct SparseEmbeddingParam: public dmlc::Parameter<SparseEmbeddingParam> {
 };
 
 struct EmbeddingParam: public dmlc::Parameter<EmbeddingParam> {
-  int input_dim;
-  int output_dim;
+  index_t input_dim;
+  index_t output_dim;
   int dtype;
   bool sparse_grad;
   DMLC_DECLARE_PARAMETER(EmbeddingParam) {
diff --git a/tests/nightly/test_large_array.py b/tests/nightly/test_large_array.py
index edf796c..8b36d09 100644
--- a/tests/nightly/test_large_array.py
+++ b/tests/nightly/test_large_array.py
@@ -38,6 +38,7 @@ LARGE_X = 1
 SMALL_X = 100
 SMALL_Y = 50
 LARGE_SIZE = LARGE_X * SMALL_Y
+LARGE_TENSOR_SHAPE = 2**32
 
 
 def test_nn():
@@ -467,6 +468,17 @@ def test_nn():
         assert res.shape[2] == 2
         assert res.shape[3] == 2
         assert res.shape[4] == 1
+    def check_embedding():
+        data = nd.random_normal(shape=(LARGE_TENSOR_SHAPE, 1))
+        weight = nd.random_normal(shape=(LARGE_TENSOR_SHAPE, 1))
+        input_dim = LARGE_TENSOR_SHAPE
+        output_dim = 1
+
+        out = nd.Embedding(data=data, weight=weight, input_dim=input_dim, output_dim=output_dim)
+
+        assert out.shape[0] == LARGE_TENSOR_SHAPE
+        assert out.shape[1] == 1
+        assert out.shape[2] == 1
 
     check_gluon_embedding()
     check_fully_connected()
@@ -488,6 +500,7 @@ def test_nn():
     check_l2_normalization()
     check_instance_norm()
     check_col2im()
+    check_embedding()
 
 
 def test_tensor():
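
For context, the `int` to `index_t` switch in the diff above matters because
tensor dimensions of 2^31 or more overflow a signed 32-bit integer. A minimal
sketch of the boundary (plain numpy, independent of MXNet):

```python
import numpy as np

# LARGE_TENSOR_SHAPE in the new test is 2**32, which cannot be
# represented by a signed 32-bit int but fits in a 64-bit index_t.
large_dim = 2 ** 32
assert large_dim > np.iinfo(np.int32).max   # would overflow int32
assert large_dim <= np.iinfo(np.int64).max  # fits in int64
print(large_dim)
```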



[GitHub] [incubator-mxnet] leezu commented on issue #17680: Cannot run pip install (error)

2020-02-24 Thread GitBox
leezu commented on issue #17680: Cannot run pip install (error)
URL: 
https://github.com/apache/incubator-mxnet/issues/17680#issuecomment-590722547
 
 
   Try with the `-e` option. If that works, please update the title of this 
issue to be more specific. Thanks.




[GitHub] [incubator-mxnet] leezu edited a comment on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
leezu edited a comment on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590716897
 
 
   As it's rather difficult to stop clang from linking with llvm openmp, we can 
disable linking intel openmp until either a solution is found to link only 
against intel openmp (which may require compiling with icc), or `mkl_rt.so` is 
fixed to prefer an already-present `libomp.so` over dynamically loading 
`libiomp5.so` and running into the conflict.
   
   To stop linking with intel openmp, remove / disable
   
   
https://github.com/apache/incubator-mxnet/blob/31144c763bfd0fe199b7fe0f23a20555c9731e7a/cmake/Modules/FindMKL.cmake#L126-L145
   
   and set `-DMKL_USE_SINGLE_DYNAMIC_LIBRARY=OFF`.
   
   Then we get
   
   ```
0x0001 (NEEDED) Shared library: [libdl.so.2]
0x0001 (NEEDED) Shared library: [libpthread.so.0]
0x0001 (NEEDED) Shared library: 
[libmkl_intel_lp64.so]
0x0001 (NEEDED) Shared library: 
[libmkl_intel_thread.so]
0x0001 (NEEDED) Shared library: [libmkl_core.so]
0x0001 (NEEDED) Shared library: [librt.so.1]
0x0001 (NEEDED) Shared library: 
[libopencv_highgui.so.3.2]
0x0001 (NEEDED) Shared library: 
[libopencv_imgcodecs.so.3.2]
0x0001 (NEEDED) Shared library: 
[libopencv_imgproc.so.3.2]
0x0001 (NEEDED) Shared library: 
[libopencv_core.so.3.2]
0x0001 (NEEDED) Shared library: [liblapack.so.3]
0x0001 (NEEDED) Shared library: [libstdc++.so.6]
0x0001 (NEEDED) Shared library: [libm.so.6]
0x0001 (NEEDED) Shared library: [libomp.so.5]
0x0001 (NEEDED) Shared library: [libgcc_s.so.1]
0x0001 (NEEDED) Shared library: [libc.so.6]
0x0001 (NEEDED) Shared library: 
[ld-linux-x86-64.so.2]
   ```
   
   and `libmkl` shouldn't attempt loading `iomp` dynamically.
   This works because llvm openmp is compatible with `iomp`.
   
   @icemelon9 I can't reproduce this problem. Please comment out the lines 
pointed out above and compile with `cmake -DUSE_CUDA=0 -DUSE_MKLDNN=1 
-DMKL_USE_SINGLE_DYNAMIC_LIBRARY=OFF -GNinja ..; ninja`
   
   @cjolivier01 any suggestion how to disable linking llvm openmp for clang if 
`iomp5` is present?
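
As an editorial aside, one way to see which OpenMP runtimes a process has
actually mapped (a Linux-only sketch; `loaded_openmp_runtimes` is a
hypothetical helper): the clash described above corresponds to both `libomp`
and `libiomp5` showing up at once.

```python
import re

def loaded_openmp_runtimes(maps_path="/proc/self/maps"):
    """Return basenames of OpenMP runtimes (libomp/libgomp/libiomp5)
    currently mapped into this process (Linux only)."""
    with open(maps_path) as f:
        text = f.read()
    return sorted(set(re.findall(r"lib(?:i?omp5?|gomp)[^/\s]*\.so[.\d]*", text)))

# A bare Python process maps none of them; after importing a library
# built as discussed above, more than one entry signals the conflict.
print(loaded_openmp_runtimes())
```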






[GitHub] [incubator-mxnet] leezu commented on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
leezu commented on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590716897
 
 
   As it's rather difficult to stop clang from linking with llvm openmp, we can 
disable linking intel openmp until a solution is found to link only against 
intel openmp. That may require compiling with the Intel compiler.
   
   To stop linking with intel openmp, remove / disable
   
   
https://github.com/apache/incubator-mxnet/blob/31144c763bfd0fe199b7fe0f23a20555c9731e7a/cmake/Modules/FindMKL.cmake#L126-L145
   
   and set `-DMKL_USE_SINGLE_DYNAMIC_LIBRARY=OFF`.
   
   Then we get
   
   ```
0x0001 (NEEDED) Shared library: [libdl.so.2]
0x0001 (NEEDED) Shared library: [libpthread.so.0]
0x0001 (NEEDED) Shared library: 
[libmkl_intel_lp64.so]
0x0001 (NEEDED) Shared library: 
[libmkl_intel_thread.so]
0x0001 (NEEDED) Shared library: [libmkl_core.so]
0x0001 (NEEDED) Shared library: [librt.so.1]
0x0001 (NEEDED) Shared library: 
[libopencv_highgui.so.3.2]
0x0001 (NEEDED) Shared library: 
[libopencv_imgcodecs.so.3.2]
0x0001 (NEEDED) Shared library: 
[libopencv_imgproc.so.3.2]
0x0001 (NEEDED) Shared library: 
[libopencv_core.so.3.2]
0x0001 (NEEDED) Shared library: [liblapack.so.3]
0x0001 (NEEDED) Shared library: [libstdc++.so.6]
0x0001 (NEEDED) Shared library: [libm.so.6]
0x0001 (NEEDED) Shared library: [libomp.so.5]
0x0001 (NEEDED) Shared library: [libgcc_s.so.1]
0x0001 (NEEDED) Shared library: [libc.so.6]
0x0001 (NEEDED) Shared library: 
[ld-linux-x86-64.so.2]
   ```
   
   and `libmkl` shouldn't attempt loading `iomp` dynamically.
   This works because llvm openmp is compatible with `iomp`.
   
   @icemelon9 I can't reproduce this problem. Please comment out the lines 
pointed out above and compile with `cmake -DUSE_CUDA=0 -DUSE_MKLDNN=1 
-DMKL_USE_SINGLE_DYNAMIC_LIBRARY=OFF -GNinja ..; ninja`




[GitHub] [incubator-mxnet] leezu commented on issue #17682: Revert "MXNet FFI for Operator Imperative Invocation"

2020-02-24 Thread GitBox
leezu commented on issue #17682: Revert "MXNet FFI for Operator Imperative 
Invocation"
URL: https://github.com/apache/incubator-mxnet/pull/17682#issuecomment-590712094
 
 
   @hzfan feel free to provide a quick fix and I'll close this. Otherwise let's 
revert FFI until fixed.




[incubator-mxnet] branch revert-17510-poc-tvmffi_pr created (now 089dc17)

2020-02-24 Thread lausen

lausen pushed a change to branch revert-17510-poc-tvmffi_pr
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at 089dc17  Revert "MXNet FFI for Operator Imperative Invocation (#17510)"

This branch includes the following new commits:

 new 089dc17  Revert "MXNet FFI for Operator Imperative Invocation (#17510)"

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[GitHub] [incubator-mxnet] leezu opened a new pull request #17682: Revert "MXNet FFI for Operator Imperative Invocation"

2020-02-24 Thread GitBox
leezu opened a new pull request #17682: Revert "MXNet FFI for Operator 
Imperative Invocation"
URL: https://github.com/apache/incubator-mxnet/pull/17682
 
 
   Reverts apache/incubator-mxnet#17510
   
   Breaks cmake build: `libmxnet.so: undefined symbol: MXNetFuncListGlobalNames`





[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-24 Thread aaronmarkham

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 90983af  Bump the publish timestamp.
90983af is described below

commit 90983afccc58fdb404c08e21b82430d033dddfc2
Author: mxnet-ci 
AuthorDate: Tue Feb 25 06:43:38 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..30fa6fd
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Feb 25 06:43:38 UTC 2020



[GitHub] [incubator-mxnet] szha opened a new pull request #17681: [CD] update pypi description, setup.py

2020-02-24 Thread GitBox
szha opened a new pull request #17681: [CD] update pypi description, setup.py
URL: https://github.com/apache/incubator-mxnet/pull/17681
 
 
   ## Description ##
   update pypi description, setup.py
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] update pypi description to include nightly manifest link
   - [x] update setup.py




[GitHub] [incubator-mxnet] yerzhik opened a new issue #17680: Cannot run pip install (error)

2020-02-24 Thread GitBox
yerzhik opened a new issue #17680: Cannot run pip install (error)
URL: https://github.com/apache/incubator-mxnet/issues/17680
 
 
   ## Description
   Successfully ran make (cmake, make).
   Then, after running `pip install --user .` (without the `-e` option), I get 
the error message below.
   ### Error Message
   ```
   WARNING: pip is being invoked by an old script wrapper. This will fail in a 
future version of pip.
   Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the 
underlying issue.
   To avoid this problem you can invoke Python with '-m pip' instead of running 
pip directly.
   DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. 
Please upgrade your Python as Python 2.7 is no longer maintained. A future 
version of pip will drop support for Python 2.7. More details about Python 2 
support in pip, can be found at 
https://pip.pypa.io/en/latest/development/release-process/#python-2-support
   Processing /home/user/temp/mxnet/python
   Requirement already satisfied: numpy<2.0.0,>1.16.0 in 
/home/user/.local/lib/python2.7/site-packages (from mxnet==1.6.0) (1.16.6)
   Requirement already satisfied: requests<3,>=2.20.0 in 
/home/user/.local/lib/python2.7/site-packages (from mxnet==1.6.0) (2.23.0)
   Requirement already satisfied: graphviz<0.9.0,>=0.8.1 in 
/home/user/.local/lib/python2.7/site-packages (from mxnet==1.6.0) (0.8.4)
   Requirement already satisfied: certifi>=2017.4.17 in 
/home/user/.local/lib/python2.7/site-packages (from 
requests<3,>=2.20.0->mxnet==1.6.0) (2019.11.28)
   Requirement already satisfied: idna<3,>=2.5 in 
/home/user/.local/lib/python2.7/site-packages (from 
requests<3,>=2.20.0->mxnet==1.6.0) (2.9)
   Requirement already satisfied: chardet<4,>=3.0.2 in 
/home/user/.local/lib/python2.7/site-packages (from 
requests<3,>=2.20.0->mxnet==1.6.0) (3.0.4)
   Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in 
/home/user/.local/lib/python2.7/site-packages (from 
requests<3,>=2.20.0->mxnet==1.6.0) (1.25.8)
   Building wheels for collected packages: mxnet
 Building wheel for mxnet (setup.py) ... error
 ERROR: Command errored out with exit status 1:
  command: /usr/bin/python -u -c 'import sys, setuptools, tokenize; 
sys.argv[0] = '"'"'/tmp/pip-req-build-ohO0MR/setup.py'"'"'; 
__file__='"'"'/tmp/pip-req-build-ohO0MR/setup.py'"'"';f=getattr(tokenize, 
'"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', 
'"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' 
bdist_wheel -d /tmp/pip-wheel-I12gsz
  cwd: /tmp/pip-req-build-ohO0MR/
 Complete output (21 lines):
 running bdist_wheel
 running build
 running build_py
 creating build
 creating build/lib.linux-x86_64-2.7
 creating build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/operator.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/symbol_doc.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/executor.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/random.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/executor_manager.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/model.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/test_utils.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/lr_scheduler.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/error.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/torch.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/base.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/context.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/tvmop.py -> build/lib.linux-x86_64-2.7/mxnet
 copying mxnet/monitor.py -> build/lib.linux-x86_64-2.7/mxnet
 error: can't copy 'mxnet/space.py': doesn't exist or not a regular file
 
 ERROR: Failed building wheel for mxnet
 Running setup.py clean for mxnet
   Failed to build mxnet
   Installing collected packages: mxnet
 Attempting uninstall: mxnet
   Found existing installation: mxnet 1.6.0
   Uninstalling mxnet-1.6.0:
 Successfully uninstalled mxnet-1.6.0
   Running setup.py install for mxnet ... error
   ERROR: Command errored out with exit status 1:
command: /usr/bin/python -u -c 'import sys, setuptools, tokenize; 
sys.argv[0] = '"'"'/tmp/pip-req-build-ohO0MR/setup.py'"'"'; 
__file__='"'"'/tmp/pip-req-build-ohO0MR/setup.py'"'"';f=getattr(tokenize, 
'"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', 
'"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install 
--record /tmp/pip-record-gg1hhB/install-record.txt 
--single-version-externally-managed --user --prefix= --compile 
--install-headers /home/user/.local/include/python2.7/mxnet
cwd: /tmp/pip-req-build-ohO0MR/
   Complete output (21 lines):
   running install
   running build
   running build_py
   creating build
   

[GitHub] [incubator-mxnet] szha commented on issue #17671: mxnet=1.6.0 for OSX on pypi does not work

2020-02-24 Thread GitBox
szha commented on issue #17671: mxnet=1.6.0 for OSX on pypi does not work
URL: 
https://github.com/apache/incubator-mxnet/issues/17671#issuecomment-590700576
 
 
   I made py35 available too




[GitHub] [incubator-mxnet] TaoLv commented on issue #17650: [numpy] add magic methods for bitwise ops

2020-02-24 Thread GitBox
TaoLv commented on issue #17650: [numpy] add magic methods for bitwise ops
URL: https://github.com/apache/incubator-mxnet/pull/17650#issuecomment-590694562
 
 
   If there is a single commit in the PR, its commit message will be used for 
the merge commit. Otherwise, the PR title and all commit messages will be used.




[GitHub] [incubator-mxnet] haojin2 commented on issue #17650: [numpy] add magic methods for bitwise ops

2020-02-24 Thread GitBox
haojin2 commented on issue #17650: [numpy] add magic methods for bitwise ops
URL: https://github.com/apache/incubator-mxnet/pull/17650#issuecomment-590688647
 
 
   @leezu That's quite weird. GitHub used to automatically use the PR title 
upon merging; maybe that has changed over time.




[GitHub] [incubator-mxnet] leezu edited a comment on issue #17650: [numpy] add magic methods for bitwise ops

2020-02-24 Thread GitBox
leezu edited a comment on issue #17650: [numpy] add magic methods for bitwise 
ops
URL: https://github.com/apache/incubator-mxnet/pull/17650#issuecomment-590685166
 
 
   @haojin2 @Yiyan66 let's try to have some meaningful commit message. Not sure 
what "ok (#17650)" is supposed to mean ;)
   
   @haojin2 you can change the commit message when merging.




[GitHub] [incubator-mxnet] leezu commented on issue #17650: [numpy] add magic methods for bitwise ops

2020-02-24 Thread GitBox
leezu commented on issue #17650: [numpy] add magic methods for bitwise ops
URL: https://github.com/apache/incubator-mxnet/pull/17650#issuecomment-590685166
 
 
   @haojin2 let's try to have some meaningful commit message. Not sure what "ok 
(#17650)" is supposed to mean ;)




[incubator-mxnet] branch master updated: Fix (#17674)

2020-02-24 Thread reminisce

reminisce pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 31144c7  Fix (#17674)
31144c7 is described below

commit 31144c763bfd0fe199b7fe0f23a20555c9731e7a
Author: reminisce 
AuthorDate: Mon Feb 24 19:58:25 2020 -0800

Fix (#17674)
---
 src/nnvm/plan_memory.cc                   | 25 +++++++++++++------------
 tests/python/unittest/test_numpy_gluon.py | 21 +++++++++++++++++++++
 2 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/src/nnvm/plan_memory.cc b/src/nnvm/plan_memory.cc
index 6c6e02d..3815f23 100644
--- a/src/nnvm/plan_memory.cc
+++ b/src/nnvm/plan_memory.cc
@@ -38,21 +38,22 @@ namespace {
 // Return bytes of data flag.
 static int MXGetDTypeSize(int type_flag) {
   switch (type_flag) {
-    case kUint8:
-    case kInt8:
+    case mshadow::kUint8:
+    case mshadow::kInt8:
+    case mshadow::kBool:
       return 1;
-    case kFloat16:
-    case kBfloat16:
-    case kInt16:
-    case kUint16:
+    case mshadow::kFloat16:
+    case mshadow::kBfloat16:
+    case mshadow::kInt16:
+    case mshadow::kUint16:
       return 2;
-    case kFloat32:
-    case kInt32:
-    case kUint32:
+    case mshadow::kFloat32:
+    case mshadow::kInt32:
+    case mshadow::kUint32:
       return 4;
-    case kFloat64:
-    case kInt64:
-    case kUint64:
+    case mshadow::kFloat64:
+    case mshadow::kInt64:
+    case mshadow::kUint64:
       return 8;
     default:
       LOG(FATAL) << "unknown type_flag=" << type_flag;
diff --git a/tests/python/unittest/test_numpy_gluon.py b/tests/python/unittest/test_numpy_gluon.py
index 6ce9e18..0d1e5fe 100644
--- a/tests/python/unittest/test_numpy_gluon.py
+++ b/tests/python/unittest/test_numpy_gluon.py
@@ -400,6 +400,27 @@ def test_net_symbol_save_load():
                               mx.np.random.normal(0, 1, (10, 5, 8))])
 
 
+@with_seed()
+@use_np
+def test_hybridize_boolean_dtype():
+    class Foo(gluon.HybridBlock):
+        def __init__(self, prefix=None, params=None):
+            super(Foo, self).__init__(prefix=prefix, params=params)
+
+        def hybrid_forward(self, F, valid_length):
+            mask = ((F.np.ones((10,)) / 2) < valid_length)
+            return mask
+
+    valid_length = mx.np.random.uniform(size=(10,))
+    foo = Foo()
+    out1 = foo(valid_length)
+
+    foo = Foo()
+    foo.hybridize()
+    out2 = foo(valid_length)
+
+    assert mx.test_utils.same(out1.asnumpy(), out2.asnumpy())
+
 
 if __name__ == '__main__':
 import nose
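
The switch in `MXGetDTypeSize` above maps type flags to element byte sizes;
the previously missing bool case (1 byte) is what produced
"unknown type_flag=7". The table can be cross-checked against numpy's dtype
sizes (a sketch; bfloat16 is omitted because plain numpy has no such dtype):

```python
import numpy as np

# Element sizes mirrored from the MXGetDTypeSize switch above.
expected = {
    "uint8": 1, "int8": 1, "bool": 1,
    "float16": 2, "int16": 2, "uint16": 2,
    "float32": 4, "int32": 4, "uint32": 4,
    "float64": 8, "int64": 8, "uint64": 8,
}
for name, size in expected.items():
    assert np.dtype(name).itemsize == size
print("all sizes match")
```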



[GitHub] [incubator-mxnet] reminisce merged pull request #17674: Fix boolean dtype error in hybridization

2020-02-24 Thread GitBox
reminisce merged pull request #17674: Fix boolean dtype error in hybridization
URL: https://github.com/apache/incubator-mxnet/pull/17674
 
 
   




[GitHub] [incubator-mxnet] reminisce closed issue #17638: [Numpy] unknown type_flag=7

2020-02-24 Thread GitBox
reminisce closed issue #17638: [Numpy] unknown type_flag=7
URL: https://github.com/apache/incubator-mxnet/issues/17638
 
 
   




[GitHub] [incubator-mxnet] leezu closed issue #17678: Cuda 10.2 Ubuntu 18 mxnet installation error

2020-02-24 Thread GitBox
leezu closed issue #17678: Cuda 10.2 Ubuntu 18  mxnet installation error 
URL: https://github.com/apache/incubator-mxnet/issues/17678
 
 
   




[GitHub] [incubator-mxnet] leezu commented on issue #17678: Cuda 10.2 Ubuntu 18 mxnet installation error

2020-02-24 Thread GitBox
leezu commented on issue #17678: Cuda 10.2 Ubuntu 18  mxnet installation error 
URL: 
https://github.com/apache/incubator-mxnet/issues/17678#issuecomment-590670328
 
 
   The instruction is wrong. You need to run `pip install --pre mxnet_cu102 -f 
https://dist.mxnet.io/python/cu102mkl`




[GitHub] [incubator-mxnet] leezu edited a comment on issue #17678: Cuda 10.2 Ubuntu 18 mxnet installation error

2020-02-24 Thread GitBox
leezu edited a comment on issue #17678: Cuda 10.2 Ubuntu 18  mxnet installation 
error 
URL: 
https://github.com/apache/incubator-mxnet/issues/17678#issuecomment-590670328
 
 
   The instruction is wrong. You need to run `pip install --pre mxnet_cu102mkl -f 
https://dist.mxnet.io/python/cu102mkl`




[GitHub] [incubator-mxnet] xinyu-intel opened a new pull request #17679: [MKL-DNN] BatchNormRelu Fusion

2020-02-24 Thread GitBox
xinyu-intel opened a new pull request #17679: [MKL-DNN] BatchNormRelu Fusion
URL: https://github.com/apache/incubator-mxnet/pull/17679
 
 
   ## Description ##
   BatchNormRelu Fusion while training.
   
   usage:
   
   ```
   nn.BatchNorm(fuse_relu=True, **kwargs)
   ```
   
   @pengzhao-intel @TaoLv 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[incubator-mxnet] branch master updated (fc778fc -> 4634577)

2020-02-24 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from fc778fc  cmake: copy dnnl headers to include/mkldnn (#17647)
 add 4634577  ok (#17650)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/numpy/multiarray.py | 52 +---
 1 file changed, 49 insertions(+), 3 deletions(-)



[GitHub] [incubator-mxnet] haojin2 merged pull request #17650: [numpy] add magic methods for bitwise ops

2020-02-24 Thread GitBox
haojin2 merged pull request #17650: [numpy] add magic methods for bitwise ops
URL: https://github.com/apache/incubator-mxnet/pull/17650
 
 
   




[GitHub] [incubator-mxnet] ChaiBapchya edited a comment on issue #17487: [OpPerf] Consolidate array manipulation related operators

2020-02-24 Thread GitBox
ChaiBapchya edited a comment on issue #17487: [OpPerf] Consolidate array 
manipulation related operators
URL: https://github.com/apache/incubator-mxnet/pull/17487#issuecomment-582123433
 
 
   All 5 categories 
   ```
   >>> from benchmark.opperf.nd_operations.array_manipulation_operators import 
run_rearrange_operators_benchmarks, run_shape_operators_benchmarks, 
run_expanding_operators_benchmarks, run_rounding_operators_benchmarks
   ```
   Results
   ```
   run_expanding_operators_benchmarks()
   INFO:root:Begin Benchmark - broadcast_axes
   INFO:root:Complete Benchmark - broadcast_axes
   INFO:root:Begin Benchmark - broadcast_axis
   INFO:root:Complete Benchmark - broadcast_axis
   INFO:root:Begin Benchmark - broadcast_like
   INFO:root:Complete Benchmark - broadcast_like
   INFO:root:Begin Benchmark - broadcast_to
   INFO:root:Complete Benchmark - broadcast_to
   INFO:root:Begin Benchmark - expand_dims
   INFO:root:Complete Benchmark - expand_dims
   INFO:root:Begin Benchmark - pad
   INFO:root:Complete Benchmark - pad
   INFO:root:Begin Benchmark - repeat
   INFO:root:Complete Benchmark - repeat
   INFO:root:Begin Benchmark - tile
   INFO:root:Complete Benchmark - tile
    {'broadcast_axis': [{'avg_time_forward_broadcast_axis': 0.0342, 'max_storage_mem_alloc_cpu/0': 4.096, 'inputs': {'data': (1, 1024), 'axis': 0, 'size': 2}}, {'avg_time_forward_broadcast_axis': 0.0302, 'max_storage_mem_alloc_cpu/0': 0.008, 'inputs': {'data': (1, 1), 'axis': 0, 'size': 2}}, {'avg_time_forward_broadcast_axis': 0.024, 'max_storage_mem_alloc_cpu/0': 0.8, 'inputs': {'data': (1, 100), 'axis': 0, 'size': 2}}],
     'broadcast_like': [{'avg_time_forward_broadcast_like': 1.5138, 'max_storage_mem_alloc_cpu/0': 4194.3042, 'inputs': {'lhs': (1024, 1024), 'rhs': (1024, 1024)}}, {'avg_time_forward_broadcast_like': 0.1705, 'max_storage_mem_alloc_cpu/0': 400.0, 'inputs': {'lhs': (1, 10), 'rhs': (1, 10)}}, {'avg_time_forward_broadcast_like': 0.0446, 'max_storage_mem_alloc_cpu/0': 20.0, 'inputs': {'lhs': (1, 1), 'rhs': (1, 1)}}],
     'pad': [{'max_storage_mem_alloc_cpu/0': 0.192, 'inputs': {'data': (1, 4, 2, 4), 'mode': 'constant', 'pad_width': (0, 0, 0, 0, 1, 1, 1, 1)}}, {'max_storage_mem_alloc_cpu/0': 612.0, 'inputs': {'data': (10, 25, 10, 100), 'mode': 'constant', 'pad_width': (0, 0, 0, 0, 1, 1, 1, 1)}}],
     'repeat': [{'avg_time_forward_repeat': 7.5347, 'avg_time_backward_repeat': 10.3592, 'max_storage_mem_alloc_cpu/0': 4194.3042, 'inputs': {'data': (1024, 1024), 'repeats': 2, 'axis': 0}}, {'avg_time_forward_repeat': 0.0664, 'avg_time_backward_repeat': 0.0716, 'max_storage_mem_alloc_cpu/0': 40.0, 'inputs': {'data': (1, 1), 'repeats': 2, 'axis': 0}}, {'avg_time_forward_repeat': 6.0047, 'avg_time_backward_repeat': 8.3208, 'max_storage_mem_alloc_cpu/0': 4000.0, 'inputs': {'data': (1, 100), 'repeats': 2, 'axis': 0}}],
     'tile': [{'avg_time_backward_tile': 7.2161, 'max_storage_mem_alloc_cpu/0': 4194.3042, 'avg_time_forward_tile': 5.2652, 'inputs': {'data': (1024, 1024), 'reps': 2}}, {'avg_time_backward_tile': 0.0631, 'max_storage_mem_alloc_cpu/0': 40.0, 'avg_time_forward_tile': 0.1274, 'inputs': {'data': (1, 1), 'reps': 2}}, {'avg_time_backward_tile': 6.7835, 'max_storage_mem_alloc_cpu/0': 4000.0, 'avg_time_forward_tile': 4.8181, 'inputs': {'data': (1, 100), 'reps': 2}}],
     'broadcast_to': [{'max_storage_mem_alloc_cpu/0': 2097.1521, 'avg_time_forward_broadcast_to': 1.4573, 'inputs': {'data': (1, 1024), 'shape': (1024, 1024)}}, {'max_storage_mem_alloc_cpu/0': 40.0, 'avg_time_forward_broadcast_to': 0.0741, 'inputs': {'data': (1, 1), 'shape': (1, 1)}}, {'max_storage_mem_alloc_cpu/0': 2000.0, 'avg_time_forward_broadcast_to': 1.5039, 'inputs': {'data': (1, 100), 'shape': (1, 100)}}],
     'expand_dims': [{'avg_time_forward_expand_dims': 0.15, 'max_storage_mem_alloc_cpu/0': 2097.1521, 'inputs': {'data': (1024, 1024), 'axis': 0}}, {'avg_time_forward_expand_dims': 0.029, 'max_storage_mem_alloc_cpu/0': 20.0, 'inputs': {'data': (1, 1), 'axis': 0}}, {'avg_time_forward_expand_dims': 0.0524, 'max_storage_mem_alloc_cpu/0': 2000.0, 'inputs': {'data': (1, 100), 'axis': 0}}],
     'broadcast_axes': [{'avg_time_forward_broadcast_axes': 0.0416, 'max_storage_mem_alloc_cpu/0': 4.096, 'inputs': {'data': (1, 1024), 'axis': 0, 'size': 2}}, {'avg_time_forward_broadcast_axes': 0.0341, 'max_storage_mem_alloc_cpu/0': 0.004, 'inputs': {'data': (1, 1), 'axis': 0, 'size': 2}}, {'avg_time_forward_broadcast_axes': 0.0354, 'max_storage_mem_alloc_cpu/0': 0.4, 'inputs': {'data': (1, 100), 'axis': 0, 'size': 2}}]}
   ```
   ```
   run_rearrange_operators_benchmarks()
   INFO:root:Begin Benchmark - SwapAxis
   INFO:root:Complete Benchmark - SwapAxis
   INFO:root:Begin Benchmark - depth_to_space
   INFO:root:Complete Benchmark - depth_to_space
   INFO:root:Begin Benchmark - flip
   INFO:root:Complete Benchmark - flip
   INFO:root:Begin Benchmark - reverse
    INFO:root:Complete Benchmark - reverse
    ```

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16891: Upgrading MKLDNN to 1.0 causes performance regression.

2020-02-24 Thread GitBox
pengzhao-intel commented on issue #16891: Upgrading MKLDNN to 1.0 causes 
performance regression.
URL: 
https://github.com/apache/incubator-mxnet/issues/16891#issuecomment-590647460
 
 
   > Hi @pengzhao-intel, in MXNet 2.0 Cmake is planned to be the only build 
system: https://github.com/apache/incubator-mxnet/projects/18#card-30594044
   > 
   > Would that address the cons in Option 2?
   
   It's a good chance to make the system clean :)




[GitHub] [incubator-mxnet] i55code opened a new issue #17678: Cuda10.2 installation error

2020-02-24 Thread GitBox
i55code opened a new issue #17678: Cuda10.2 installation error 
URL: https://github.com/apache/incubator-mxnet/issues/17678
 
 
   Hi, I have followed this post 
(https://github.com/apache/incubator-mxnet/issues/17058) to install the nightly 
convenience builds using: 
   pip install --pre mxnet -f https://dist.mxnet.io/python/cu102mkl
   
   However, when I test the following code: 
   >>> import mxnet as mx
   >>> a = mx.nd.ones((2, 3), mx.gpu())
   
   The error message is:
   >>> import mxnet as mx
   >>> a = mx.nd.ones((2, 3), mx.gpu())
   [20:56:28] src/imperative/./imperative_utils.h:92: GPU support is disabled. 
Compile MXNet with USE_CUDA=1 to enable GPU support.
   terminate called after throwing an instance of 'dmlc::Error'
 what():  [20:56:28] src/imperative/imperative.cc:81: Operator _ones is not 
implemented for GPU.
   Stack trace:
 [bt] (0) 
/home/.local/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x307d3b) 
[0x7f1a6fb0fd3b]
 [bt] (1) 
/home/.local/lib/python3.6/site-packages/mxnet/libmxnet.so(mxnet::Imperative::InvokeOp(mxnet::Context
 const&, nnvm::NodeAttrs const&, std::vector > const&, std::vector > const&, std::vector > const&, mxnet::DispatchMode, 
mxnet::OpStatePtr)+0x6bb) [0x7f1a72cb1c5b]
   
   
   Aborted (core dumped)
   
   Please let me know how to install mxnet properly for Cuda 10.2 in ubuntu 
18.04. Thanks!




[GitHub] [incubator-mxnet] TaoLv commented on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
TaoLv commented on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590646295
 
 
   > So it seems link order is important?
   
   I think so. I remember @icemelon9 also ran into a problem where importing 
pytorch before mxnet in a single python script slowed down the execution of 
mxnet.




[GitHub] [incubator-mxnet] TaoLv commented on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
TaoLv commented on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590645896
 
 
   @leezu you may also want to take a look at the discussion around here: 
https://github.com/apache/incubator-mxnet/issues/16891#issuecomment-567051540 




[GitHub] [incubator-mxnet] TaoLv commented on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
TaoLv commented on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590645312
 
 
   @leezu I need to take time to go through this discussion. But do you think 
statically linking MKL would solve your problem? Please see the link line advisor 
here: https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor




[GitHub] [incubator-mxnet] leezu commented on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
leezu commented on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590642092
 
 
   @TaoLv @pengzhao-intel can you advise about the mkl dynamic loading of iomp? 
What's your recommendation to fix this issue.




[incubator-mxnet] branch master updated (b7c1c8d -> fc778fc)

2020-02-24 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b7c1c8d  [CI] Add AMI id to instance info on builds (#17649)
 add fc778fc  cmake: copy dnnl headers to include/mkldnn (#17647)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt | 8 
 1 file changed, 8 insertions(+)



[GitHub] [incubator-mxnet] leezu merged pull request #17647: cmake: copy dnnl headers to include/mkldnn

2020-02-24 Thread GitBox
leezu merged pull request #17647: cmake: copy dnnl headers to include/mkldnn
URL: https://github.com/apache/incubator-mxnet/pull/17647
 
 
   




[GitHub] [incubator-mxnet] leezu closed issue #17628: Missing dnnl_config.h when installing horovod

2020-02-24 Thread GitBox
leezu closed issue #17628: Missing dnnl_config.h when installing horovod
URL: https://github.com/apache/incubator-mxnet/issues/17628
 
 
   




[GitHub] [incubator-mxnet] TaoLv commented on issue #17647: cmake: copy dnnl headers to include/mkldnn

2020-02-24 Thread GitBox
TaoLv commented on issue #17647: cmake: copy dnnl headers to include/mkldnn
URL: https://github.com/apache/incubator-mxnet/pull/17647#issuecomment-590640927
 
 
   Other headers (except dnnl_config.h and dnnl_version.h) are distributed 
statically in the DNNL repo, but dnnl_config.h and dnnl_version.h are generated 
during compilation. All of these headers should be installed to the same place 
by `make install`.




[GitHub] [incubator-mxnet] leezu edited a comment on issue #17647: cmake: copy dnnl headers to include/mkldnn

2020-02-24 Thread GitBox
leezu edited a comment on issue #17647: cmake: copy dnnl headers to 
include/mkldnn
URL: https://github.com/apache/incubator-mxnet/pull/17647#issuecomment-590639354
 
 
   Why not use symbolic link as for the other include files?
   
   With this PR, `include/mkldnn` looks as follows:
   
   ```
   ls -l include/mkldnn 
   total 8
   lrwxrwxrwx 1 ubuntu ubuntu   36 Feb 25 01:46 dnnl.h -> 
../../3rdparty/mkldnn/include/dnnl.h
   lrwxrwxrwx 1 ubuntu ubuntu   38 Feb 15 01:08 dnnl.hpp -> 
../../3rdparty/mkldnn/include/dnnl.hpp
   -rw-rw-r-- 1 ubuntu ubuntu 2603 Feb 25 01:45 dnnl_config.h
   lrwxrwxrwx 1 ubuntu ubuntu   42 Feb 15 01:08 dnnl_types.h -> 
../../3rdparty/mkldnn/include/dnnl_types.h
   -rw-rw-r-- 1 ubuntu ubuntu 1074 Feb 25 01:45 dnnl_version.h
   lrwxrwxrwx 1 ubuntu ubuntu   38 Feb 15 01:00 mkldnn.h -> 
../../3rdparty/mkldnn/include/mkldnn.h
   lrwxrwxrwx 1 ubuntu ubuntu   40 Feb 15 01:08 mkldnn.hpp -> 
../../3rdparty/mkldnn/include/mkldnn.hpp
   lrwxrwxrwx 1 ubuntu ubuntu   52 Feb 15 01:08 mkldnn_dnnl_mangling.h -> 
../../3rdparty/mkldnn/include/mkldnn_dnnl_mangling.h
   lrwxrwxrwx 1 ubuntu ubuntu   44 Feb 15 01:00 mkldnn_types.h -> 
../../3rdparty/mkldnn/include/mkldnn_types.h
   lrwxrwxrwx 1 ubuntu ubuntu   46 Feb 15 01:08 mkldnn_version.h -> 
../../3rdparty/mkldnn/include/mkldnn_version.h
   ```
   
   Both approaches seem fine, but let's be consistent.




[GitHub] [incubator-mxnet] leezu commented on issue #17647: cmake: copy dnnl headers to include/mkldnn

2020-02-24 Thread GitBox
leezu commented on issue #17647: cmake: copy dnnl headers to include/mkldnn
URL: https://github.com/apache/incubator-mxnet/pull/17647#issuecomment-590639354
 
 
   Why not use symbolic link as for the other include files?
   
   With this PR, `include/mkldnn` looks as follows:
   
   ```
   ls -l include/mkldnn 
   total 8
   lrwxrwxrwx 1 ubuntu ubuntu   36 Feb 25 01:46 dnnl.h -> 
../../3rdparty/mkldnn/include/dnnl.h
   lrwxrwxrwx 1 ubuntu ubuntu   38 Feb 15 01:08 dnnl.hpp -> 
../../3rdparty/mkldnn/include/dnnl.hpp
   -rw-rw-r-- 1 ubuntu ubuntu 2603 Feb 25 01:45 dnnl_config.h
   lrwxrwxrwx 1 ubuntu ubuntu   42 Feb 15 01:08 dnnl_types.h -> 
../../3rdparty/mkldnn/include/dnnl_types.h
   -rw-rw-r-- 1 ubuntu ubuntu 1074 Feb 25 01:45 dnnl_version.h
   lrwxrwxrwx 1 ubuntu ubuntu   38 Feb 15 01:00 mkldnn.h -> 
../../3rdparty/mkldnn/include/mkldnn.h
   lrwxrwxrwx 1 ubuntu ubuntu   40 Feb 15 01:08 mkldnn.hpp -> 
../../3rdparty/mkldnn/include/mkldnn.hpp
   lrwxrwxrwx 1 ubuntu ubuntu   52 Feb 15 01:08 mkldnn_dnnl_mangling.h -> 
../../3rdparty/mkldnn/include/mkldnn_dnnl_mangling.h
   lrwxrwxrwx 1 ubuntu ubuntu   44 Feb 15 01:00 mkldnn_types.h -> 
../../3rdparty/mkldnn/include/mkldnn_types.h
   lrwxrwxrwx 1 ubuntu ubuntu   46 Feb 15 01:08 mkldnn_version.h -> 
../../3rdparty/mkldnn/include/mkldnn_version.h
   ```




[GitHub] [incubator-mxnet] astonzhang removed a comment on issue #17658: [WIP] Update website, README and NEWS with 1.6.0

2020-02-24 Thread GitBox
astonzhang removed a comment on issue #17658: [WIP] Update website, README and 
NEWS with 1.6.0
URL: https://github.com/apache/incubator-mxnet/pull/17658#issuecomment-590629334
 
 
   @ptrendx It seems that 1.6.0 does not work on macOS. I ran `pip install 
mxnet==1.6.0` on macOS Mojave, and then `import mxnet` raised errors. My 
friends also encountered the same problem on their macOS machines. Can you 
double-check on your side?
   
   Details of error:
   
   ```
   Python 3.7.3 (default, Mar 27 2019, 16:54:48) 
   [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import mxnet
   Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/astonz/.local/lib/python3.7/site-packages/mxnet/__init__.py", 
line 24, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File "/Users/astonz/.local/lib/python3.7/site-packages/mxnet/context.py", 
line 24, in <module>
    from .base import classproperty, with_metaclass, 
_MXClassPropertyMetaClass
  File "/Users/astonz/.local/lib/python3.7/site-packages/mxnet/base.py", 
line 214, in <module>
    _LIB = _load_lib()
  File "/Users/astonz/.local/lib/python3.7/site-packages/mxnet/base.py", 
line 205, in _load_lib
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
  File "/Users/astonz/miniconda3/lib/python3.7/ctypes/__init__.py", line 
356, in __init__
    self._handle = _dlopen(self._name, mode)
   OSError: 
dlopen(/Users/astonz/.local/lib/python3.7/site-packages/mxnet/libmxnet.so, 6): 
no suitable image found.  Did find:
/Users/astonz/.local/lib/python3.7/site-packages/mxnet/libmxnet.so: 
unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00
/Users/astonz/.local/lib/python3.7/site-packages/mxnet/libmxnet.so: 
unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00
   >>> exit()
   ```




[incubator-mxnet] branch master updated (12d9191 -> b7c1c8d)

2020-02-24 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 12d9191  [numpy] add op fabs, sometrue, round_ (#17619)
 add b7c1c8d  [CI] Add AMI id to instance info on builds (#17649)

No new revisions were added by this update.

Summary of changes:
 ci/build_windows.py |  3 +++
 ci/util.py  | 19 ++-
 2 files changed, 13 insertions(+), 9 deletions(-)



[GitHub] [incubator-mxnet] leezu merged pull request #17649: add AMI id to instance info on builds

2020-02-24 Thread GitBox
leezu merged pull request #17649: add AMI id to instance info on builds
URL: https://github.com/apache/incubator-mxnet/pull/17649
 
 
   




[GitHub] [incubator-mxnet] astonzhang commented on issue #17658: [WIP] Update website, README and NEWS with 1.6.0

2020-02-24 Thread GitBox
astonzhang commented on issue #17658: [WIP] Update website, README and NEWS 
with 1.6.0
URL: https://github.com/apache/incubator-mxnet/pull/17658#issuecomment-590629334
 
 
   @ptrendx It seems that 1.6.0 does not work on macOS. I ran `pip install 
mxnet==1.6.0` on macOS Mojave, and then `import mxnet` raised errors. My 
friends also encountered the same problem on their macOS machines. Can you 
double-check on your side?
   
   Details of error:
   
   ```
   Python 3.7.3 (default, Mar 27 2019, 16:54:48) 
   [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import mxnet
   Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/astonz/.local/lib/python3.7/site-packages/mxnet/__init__.py", 
line 24, in <module>
    from .context import Context, current_context, cpu, gpu, cpu_pinned
  File "/Users/astonz/.local/lib/python3.7/site-packages/mxnet/context.py", 
line 24, in <module>
    from .base import classproperty, with_metaclass, 
_MXClassPropertyMetaClass
  File "/Users/astonz/.local/lib/python3.7/site-packages/mxnet/base.py", 
line 214, in <module>
    _LIB = _load_lib()
  File "/Users/astonz/.local/lib/python3.7/site-packages/mxnet/base.py", 
line 205, in _load_lib
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)
  File "/Users/astonz/miniconda3/lib/python3.7/ctypes/__init__.py", line 
356, in __init__
    self._handle = _dlopen(self._name, mode)
   OSError: 
dlopen(/Users/astonz/.local/lib/python3.7/site-packages/mxnet/libmxnet.so, 6): 
no suitable image found.  Did find:
/Users/astonz/.local/lib/python3.7/site-packages/mxnet/libmxnet.so: 
unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00
/Users/astonz/.local/lib/python3.7/site-packages/mxnet/libmxnet.so: 
unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00
   >>> exit()
   ```
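
   For what it's worth, the "first eight bytes" in the traceback (`7F 45 4C 46 02 01 01 00`) decode to the ELF magic number, i.e. a Linux (ELF64) shared object appears to have been packaged into the macOS wheel, which macOS's `dlopen` (expecting a Mach-O image) cannot load. A small stdlib sketch of that decoding:

   ```python
   ELF_MAGIC = b"\x7fELF"  # first four bytes of every ELF binary

   def looks_like_elf(first_bytes):
       """True if the byte prefix matches the ELF magic number."""
       return first_bytes.startswith(ELF_MAGIC)

   # The bytes reported by dlopen in the traceback above:
   reported = bytes([0x7F, 0x45, 0x4C, 0x46, 0x02, 0x01, 0x01, 0x00])
   print(looks_like_elf(reported))  # → True: a Linux .so, not a Mach-O dylib
   ```

   The same check against a Mach-O 64-bit magic (`CF FA ED FE`) returns False, which is consistent with the wrong platform binary having been uploaded.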




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-24 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 90e3799  Bump the publish timestamp.
90e3799 is described below

commit 90e3799f77ae50a748cfc03d7db4545f263e3917
Author: mxnet-ci 
AuthorDate: Tue Feb 25 01:03:34 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..0efed81
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Feb 25 01:03:34 UTC 2020



[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #17587: Need help: how to manually build whl

2020-02-24 Thread GitBox
ChaiBapchya commented on issue #17587: Need help: how to manually build whl
URL: 
https://github.com/apache/incubator-mxnet/issues/17587#issuecomment-590627221
 
 
   Working on it.




[GitHub] [incubator-mxnet] leezu commented on issue #17671: mxnet=1.6.0 for OSX on pypi does not work

2020-02-24 Thread GitBox
leezu commented on issue #17671: mxnet=1.6.0 for OSX on pypi does not work
URL: 
https://github.com/apache/incubator-mxnet/issues/17671#issuecomment-590621951
 
 
   How about py35?




[GitHub] [incubator-mxnet] szha commented on issue #17671: mxnet=1.6.0 for OSX on pypi does not work

2020-02-24 Thread GitBox
szha commented on issue #17671: mxnet=1.6.0 for OSX on pypi does not work
URL: 
https://github.com/apache/incubator-mxnet/issues/17671#issuecomment-590621801
 
 
   Wheels for OSX for py36 through py38 have been uploaded. The Linux wheel is 
a long-standing problem, as the py2.py3 tag is only supported for universal 
wheels with the `any` platform tag.




[incubator-mxnet] branch master updated (f9b2a63 -> 12d9191)

2020-02-24 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from f9b2a63  [Large Tensor] Fixed col2im op (#17622)
 add 12d9191  [numpy] add op fabs, sometrue, round_ (#17619)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/_numpy_op_doc.py  | 13 +
 python/mxnet/ndarray/numpy/_op.py  | 60 --
 python/mxnet/numpy/multiarray.py   | 54 +--
 python/mxnet/numpy_dispatch_protocol.py|  3 ++
 python/mxnet/symbol/numpy/_symbol.py   | 50 --
 .../numpy/np_broadcast_reduce_op_boolean.cc|  1 +
 .../python/unittest/test_numpy_interoperability.py | 26 ++
 tests/python/unittest/test_numpy_op.py | 37 ++---
 8 files changed, 217 insertions(+), 27 deletions(-)




[GitHub] [incubator-mxnet] haojin2 merged pull request #17619: [numpy] add op fabs, sometrue, round_

2020-02-24 Thread GitBox
haojin2 merged pull request #17619: [numpy] add op fabs, sometrue, round_
URL: https://github.com/apache/incubator-mxnet/pull/17619
 
 
   




[GitHub] [incubator-mxnet] zhreshold commented on a change in pull request #17530: Add deferred compute support

2020-02-24 Thread GitBox
zhreshold commented on a change in pull request #17530: Add deferred compute 
support
URL: https://github.com/apache/incubator-mxnet/pull/17530#discussion_r383590278
 
 

 ##
 File path: src/c_api/c_api_ndarray.cc
 ##
  @@ -107,9 +107,17 @@ void MXImperativeInvokeImpl(AtomicSymbolCreator creator,
    SetNDInputsOutputs(op, &ndinputs, &ndoutputs, num_inputs, inputs,
                       num_outputs, infered_num_outputs, num_visible_outputs, outputs);
  
 -  auto state = Imperative::Get()->Invoke(Context::CPU(), attrs, ndinputs, ndoutputs);
 -  if (Imperative::Get()->is_recording()) {
 -    Imperative::Get()->RecordOp(std::move(attrs), ndinputs, ndoutputs, state);
 +  if (Imperative::Get()->is_deferred_compute()) {
 +    Imperative::Get()->RecordDeferredCompute(std::move(attrs), ndinputs, ndoutputs);
 +  } else {
 +    for (NDArray* input : ndinputs) {
 +      Imperative::DCInfo::Compute(*input);
 +    }
 +    auto state = Imperative::Get()->Invoke(
 +      Context::CPU(), attrs, ndinputs, ndoutputs);
 
 Review comment:
    Does this assume the context is always CPU?
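
   The branch in the diff above either records the op (deferred-compute mode) or recursively forces all inputs before invoking. The same record-then-force-on-demand pattern, reduced to a toy Python sketch (hypothetical names, not MXNet's API):

   ```python
   class Lazy:
       """Toy deferred-compute value: records how to produce itself,
       computes on first use, and caches the result."""
       def __init__(self, fn, inputs=()):
           self.fn, self.inputs = fn, inputs
           self.value, self.computed = None, False

       def force(self):
           if not self.computed:
               # Recursively compute inputs first, mirroring DCInfo::Compute.
               args = [x.force() for x in self.inputs]
               self.value, self.computed = self.fn(*args), True
           return self.value

   a = Lazy(lambda: 2)
   b = Lazy(lambda: 3)
   c = Lazy(lambda x, y: x + y, (a, b))
   print(c.force())  # → 5; forces a and b along the way
   ```

   The caching flag plays the role of `is_computed_`: once a value is materialized, forcing it again is a no-op.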




[GitHub] [incubator-mxnet] zhreshold commented on a change in pull request #17530: Add deferred compute support

2020-02-24 Thread GitBox
zhreshold commented on a change in pull request #17530: Add deferred compute 
support
URL: https://github.com/apache/incubator-mxnet/pull/17530#discussion_r383592008
 
 

 ##
 File path: src/imperative/imperative.cc
 ##
 @@ -544,4 +705,60 @@ std::vector<NDArray*> Imperative::Backward(
   return {};
 }
 
+Imperative::DCInfo::DCInfo(const std::vector<NDArray *> &inputs,
+                           const std::vector<NDArray *> &outputs) {
+  this->inputs_.reserve(inputs.size());
+  this->input_handles_.reserve(inputs.size());
+  for (const NDArray *arr : inputs) {
+    CHECK(!arr->is_none());
+    this->inputs_.push_back(*arr);
+    this->input_handles_.push_back(arr);
+  }
+
+  this->outputs_.reserve(outputs.size());
+  for (const NDArray *arr : outputs) {
+    CHECK(!arr->is_none());
+    this->outputs_.push_back(*arr);
+  }
+}
+
+Imperative::DCInfo &
+Imperative::DCInfo::Create(const nnvm::ObjectPtr &node,
+                           const std::vector<NDArray *> &inputs,
+                           const std::vector<NDArray *> &outputs) {
+  node->info.construct(inputs, outputs);
+  return Imperative::DCInfo::Get(node);
+}
+
+void Imperative::DCInfo::Compute(const NDArray &arr) {
+  if (Imperative::DCInfo::IsComputed(arr))
+    return;
+
+  DCInfo &info = Imperative::DCInfo::Get(arr.deferredcompute_entry_.node);
+  info.is_computed_ = true;  // We will Invoke at the end of this function.
+
+  // Recursively compute input arrays
+  for (const NDArray &input : info.inputs_) {
+    Compute(input);
+  }
+
+  // Prepare pointers
+  std::vector<NDArray *> ndinputs, ndoutputs;
+  ndinputs.reserve(info.inputs_.size());
+  ndoutputs.reserve(info.outputs_.size());
+  for (NDArray &input : info.inputs_)
+    ndinputs.push_back(&input);
 
 Review comment:
   can you point me to the mechanism that keeps the pointers to NDArrays valid 
through the lifetime of DCInfo?
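For readers following along, the recursive strategy in `DCInfo::Compute` above (mark the node computed first, recurse into inputs, then invoke the recorded op) can be sketched in plain Python. The class and method names here are illustrative only, not MXNet API:

```python
class LazyNode:
    """Toy deferred-compute node: records an op and its inputs,
    and only evaluates when compute() is first called."""
    def __init__(self, fn, inputs=(), value=None):
        self.fn = fn
        self.inputs = list(inputs)
        self.value = value                 # leaves carry a concrete value up front
        self.is_computed = value is not None

    def compute(self):
        if self.is_computed:               # memoized: each node evaluates once
            return self.value
        self.is_computed = True            # mark first, as DCInfo::Compute does
        args = [node.compute() for node in self.inputs]  # recurse into inputs
        self.value = self.fn(*args)        # then invoke the recorded op
        return self.value

# usage: build (2 + 3) * 4 lazily, evaluate on demand
a = LazyNode(None, value=2)
b = LazyNode(None, value=3)
s = LazyNode(lambda x, y: x + y, [a, b])
p = LazyNode(lambda x, y: x * y, [s, LazyNode(None, value=4)])
print(p.compute())  # 20
```

Note this sketch sidesteps the C++ lifetime question raised above by letting Python's reference counting keep inputs alive; the real `DCInfo` instead stores owned `NDArray` copies alongside the raw handles.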




[GitHub] [incubator-mxnet] connorgoggins commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-02-24 Thread GitBox
connorgoggins commented on a change in pull request #17632: [Large Tensor] 
Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r383575441
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -278,9 +278,9 @@ void RNNForwardTraining(DType* ws,
 bool state_outputs,
 const int num_layers,
 const int direction,
-const int seq_length,
-const int batch_size,
-const int input_size,
+const index_t seq_length,
+const index_t batch_size,
+const index_t input_size,
 
 Review comment:
   Excellent point, updating now.




[GitHub] [incubator-mxnet] connorgoggins commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-02-24 Thread GitBox
connorgoggins commented on a change in pull request #17632: [Large Tensor] 
Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r383575372
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -361,9 +361,9 @@ void RNNBackward(DType* ws,
  DType* rs,
  const int num_layers,
  const int direction,
- const int seq_length,
- const int batch_size,
- const int input_size,
+ const index_t seq_length,
+ const index_t batch_size,
+ const index_t input_size,
 
 Review comment:
   Excellent point, updating now.




[GitHub] [incubator-mxnet] connorgoggins commented on issue #17677: [Large Tensor] Fix cumsum op

2020-02-24 Thread GitBox
connorgoggins commented on issue #17677: [Large Tensor] Fix cumsum op
URL: https://github.com/apache/incubator-mxnet/pull/17677#issuecomment-590597021
 
 
   @mxnet-label-bot add [pr-awaiting-review]




[GitHub] [incubator-mxnet] leezu commented on issue #17587: Need help: how to manually build whl

2020-02-24 Thread GitBox
leezu commented on issue #17587: Need help: how to manually build whl
URL: 
https://github.com/apache/incubator-mxnet/issues/17587#issuecomment-590596729
 
 
   @hkvision this is a known issue. Workaround: 
https://github.com/apache/incubator-mxnet/issues/16933#issuecomment-561461776
   
   @ChaiBapchya if you could help fix it, that would be great.




[GitHub] [incubator-mxnet] connorgoggins opened a new pull request #17677: [Large Tensor] Fix cumsum op

2020-02-24 Thread GitBox
connorgoggins opened a new pull request #17677: [Large Tensor] Fix cumsum op
URL: https://github.com/apache/incubator-mxnet/pull/17677
 
 
   ## Description ##
   The cumsum op was previously breaking on large tensor (dimension > 2^32) 
data. With the following input:
   ```
   run_performance_test(nd.cumsum, inputs=[{"a": nd.random_normal(shape=(2**32 + 1, 1))}], run_backward=True, warmup=1, runs=1)
   ```
   the following error was thrown:
   ```
   Segmentation fault (core dumped)
   ```
   
   To root cause this issue, I ran the previous command in a Python script with 
GDB, and found that the underlying problem was in the data type of several 
variables in the forward/backward structs in `np_cumsum-inl.h`. These
variables used the `int` dtype when they should have been using `index_t` to 
properly handle long int indices. I switched these variables to `index_t` in 
the struct header and, after rebuilding, the previous input command displayed 
the correct output:
   ```
   INFO:root:Begin Benchmark - cumsum
   INFO:root:Complete Benchmark - cumsum
   [{'cumsum': [{'inputs': {'a': ''}, 
'max_storage_mem_alloc_cpu/0': 33285996.0, 'avg_time_forward_cumsum': 
4366.7148, 'avg_time_backward_cumsum': 12744.9971}]}]
   ```
   
   To ensure completeness and to prevent future breaking changes, I also added 
a nightly test for the cumsum op with large tensor data in 
`tests/nightly/test_large_array.py`.
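The `int` to `index_t` change matters because a 32-bit signed index silently wraps once a tensor dimension passes 2^31 - 1. A quick Python illustration of the wraparound, emulating a C `int` by masking (the helper name is illustrative):

```python
def as_c_int32(n):
    """Emulate C `int` overflow: wrap n to a signed 32-bit value."""
    n &= 0xFFFFFFFF                      # keep the low 32 bits
    return n - 2**32 if n >= 2**31 else n

length = 2**32 + 1                       # the tensor length from the repro above
print(as_c_int32(length))                # 1 -- a 32-bit `int` index wraps around
print(length)                            # 4294967297 -- a 64-bit `index_t` holds it
```

With the index wrapped to 1, the kernel reads far outside the intended range, which is consistent with the observed segfault.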
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [x] Code is well-documented
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - M src/operator/numpy/np_cumsum-inl.h
   - M tests/nightly/test_large_array.py
   
   ## Comments ##
   Tested on r5dn.24xl-ubuntu 16.04 and p2.16xl-ubuntu 16.04 with
   1. Individual op run
   2. Full OpPerf run
   
   ## Results ##
   The key difference between CPU and GPU tests was the instance type 
(r5dn.24xl for CPU, p2.16xl for GPU). All relevant build flags remain the same, 
and both were tested using CPU context.
   
   [Single operator test - cumsum op (GPU)]() - pending
   [Single operator test - cumsum op 
(CPU)](https://gist.github.com/connorgoggins/7d1a5912512b1dbbacff350e3cd576ee)
   
   [Full OpPerf test (GPU)]() - pending
   [Full OpPerf test (CPU)]() - pending
   
   @apeforest @access2rohit @ChaiBapchya 




[GitHub] [incubator-mxnet] larroy closed pull request #15694: Improve openblas CMake logic, add generic blas option.

2020-02-24 Thread GitBox
larroy closed pull request #15694: Improve openblas CMake logic, add generic 
blas option.
URL: https://github.com/apache/incubator-mxnet/pull/15694
 
 
   




[GitHub] [incubator-mxnet] larroy commented on issue #15694: Improve openblas CMake logic, add generic blas option.

2020-02-24 Thread GitBox
larroy commented on issue #15694: Improve openblas CMake logic, add generic 
blas option.
URL: https://github.com/apache/incubator-mxnet/pull/15694#issuecomment-590592861
 
 
   Thanks for the reviews. Unfortunately I don't have time to keep pushing this 
PR.




[GitHub] [incubator-mxnet] szha opened a new issue #17676: [RFC] MXNet 2.0 API Deprecation

2020-02-24 Thread GitBox
szha opened a new issue #17676: [RFC] MXNet 2.0 API Deprecation
URL: https://github.com/apache/incubator-mxnet/issues/17676
 
 
   As the MXNet community is working on the next major version of MXNet as 
described in #16167, this RFC seeks to clarify the scope of API deprecation, to 
inform the community of the replacement API design, and to ensure informed 
consensus.
   
   Thanks to the long history of MXNet and the accumulated efforts of the 
community, MXNet now supports a wide range of neural network model training and 
deployment use cases. Many of these use cases have seen several generations of 
API design and implementation. Take model training as an example: there have
been the Symbol Model API, Symbol Module API, and Gluon Hybrid Block API, all 
of which coexist in MXNet. Older generations of API often have a significant 
body of users and thus require time from the community to maintain, even though 
the supported use cases are mostly covered by a newer generation of API. This 
maintenance burden not only consumes the MXNet community's time and energy and 
can distract it from its longer-term goals, but also puts pressure on CI and 
binary distribution.
   
   In this RFC, we list several candidate API to be deprecated and the 
corresponding new generation of API as replacement. Unless otherwise stated, 
these APIs will continue to be supported in the future 1.x releases that happen 
in parallel to the 2.0 development. On the other hand, participating in the RFC 
for the new replacement feature of the feature you are interested in is the 
best way to ensure continued support in 2.0 for that feature. To make it easier 
to navigate, the replacement feature RFCs are linked in each section.
   
   To make the discussion more productive, I recommend the following actions:
   
   * If a feature in MXNet that you are interested in currently depends on any 
of the deprecated APIs, and you plan to switch from 1.x to 2.0, please 
participate in the RFC for the replacement feature. Please also direct any 
related questions about how the replacement feature covers your use cases to 
the replacement feature RFCs.
   * If, after discussion in the replacement RFCs, you believe that the 
replacement feature cannot replace the candidate feature for deprecation, 
please call it out in this RFC.
     * Make sure to include your argument on why that is the case, and clarify 
what use case cannot be supported in the new feature.
     * If you wish to commit time and sponsor the continued maintenance 
beyond what's specified in the following sections, please state so along with 
your comment.
     * You may also seek other community members to sponsor the feature as 
comments in this RFC.
     * The group of sponsors needs to collectively clarify what additional 
support the feature will receive from them, and commit to the cost of 
maintenance, development, and operations (such as CI).
   
   
   Please always keep the discussion civilized and informative. Comments 
otherwise will be folded.
   
   ## mxnet.numpy and mxnet.ndarray
   
   Traditionally MXNet provided the `mx.nd` API with operators inspired by, but 
often incompatible with, NumPy. Based on RFC
[#14253](https://github.com/apache/incubator-mxnet/issues/14253) there has been 
a large and ongoing effort to provide Numpy compatible operators in `mx.np` 
namespace.
   This means that MXNet currently provides two incompatible APIs with separate 
backend implementations achieving similar goals, doubling the maintenance 
burden of developers. Note that there are some deep learning operators in 
`mx.nd` that don't have the counterparts in `mx.np`. These operators will be 
migrated to `mx.npx` namespace and will be tracked in #17096.
   
   Given the wide impact of this decision, these people convened on 2/19/2020 
and reached consensus on recommending **Removal** and parallel maintenance of 
1.x and 2.x as the option forward: @eric-haibin-lin, @mli, @haojin2, @szhengac, 
@yizhiliu, @sxjscience, @reminisce, @leezu, @zhreshold, @apeforest, @oorqueda, 
@rondogency
   
   | Options | Removal | Deprecation | Separate Compatibility API |

[GitHub] [incubator-mxnet] connorgoggins commented on a change in pull request #17675: [Large Tensor] Fix multi_lars op

2020-02-24 Thread GitBox
connorgoggins commented on a change in pull request #17675: [Large Tensor] Fix 
multi_lars op
URL: https://github.com/apache/incubator-mxnet/pull/17675#discussion_r383560463
 
 

 ##
 File path: tests/nightly/test_large_array.py
 ##
 @@ -455,6 +455,20 @@ def npy_instance_norm(data, gamma, beta, axis, eps=1E-5):
 assert_almost_equal(out, out_nd.asnumpy(), forward_check_eps,
 forward_check_eps)
 
+def check_multi_lars():
+    lrs = nd.random_normal(shape=(2**32 + 1, 1))
+    weights_sum_sq = nd.random_normal(shape=(2**32 + 1, 1))
+    grads_sum_sq = nd.random_normal(shape=(2**32 + 1, 1))
+    wds = nd.random_normal(shape=(2**32 + 1, 1))
+    eta = .1
+    eps = .9
+
+    out = nd.multi_lars(lrs=lrs, weights_sum_sq=weights_sum_sq, grads_sum_sq=grads_sum_sq,
+                        wds=wds, eta=eta, eps=eps)
+
+    assert out.shape[0] == 4294967297
 
 Review comment:
   Done!




[GitHub] [incubator-mxnet] larroy edited a comment on issue #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-02-24 Thread GitBox
larroy edited a comment on issue #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293#issuecomment-590587794
 
 
   I don't have much bandwidth left, but if the change is this small I can 
finish the PR. Linux GPU seems to be timing out often, though.




[GitHub] [incubator-mxnet] larroy commented on issue #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-02-24 Thread GitBox
larroy commented on issue #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293#issuecomment-590587794
 
 
   ok. Linux GPU seems to be timing out often, though.




[GitHub] [incubator-mxnet] samskalicky commented on issue #17623: Dynamic subgraph compile support

2020-02-24 Thread GitBox
samskalicky commented on issue #17623: Dynamic subgraph compile support
URL: https://github.com/apache/incubator-mxnet/pull/17623#issuecomment-590577917
 
 
   @sergeykolychev I had to update the optimize_for API call in the perl 
bindings. Can you please review/approve for the change here:
   
https://github.com/apache/incubator-mxnet/pull/17623/files#diff-cc128e9c950abe76c19162ce01fe2402




[incubator-mxnet] branch master updated (3f0b049 -> f9b2a63)

2020-02-24 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3f0b049  MXNet FFI for Operator Imperative Invocation (#17510)
 add f9b2a63  [Large Tensor] Fixed col2im op (#17622)

No new revisions were added by this update.

Summary of changes:
 src/operator/nn/im2col.h  |  4 ++--
 tests/nightly/test_large_array.py | 14 ++
 2 files changed, 16 insertions(+), 2 deletions(-)






[GitHub] [incubator-mxnet] apeforest merged pull request #17622: [Large Tensor] Fixed col2im op

2020-02-24 Thread GitBox
apeforest merged pull request #17622: [Large Tensor] Fixed col2im op
URL: https://github.com/apache/incubator-mxnet/pull/17622
 
 
   




[GitHub] [incubator-mxnet] leezu commented on issue #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-02-24 Thread GitBox
leezu commented on issue #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293#issuecomment-590564989
 
 
   @larroy why close this issue?
   
   You can just copy the suggested code change and push, then it can be merged:
   
   ``` 
  check_language(CUDA)
  if (NOT CMAKE_CUDA_COMPILER_LOADED AND UNIX AND EXISTS "/usr/local/cuda/bin/nvcc")
    set(CMAKE_CUDA_COMPILER "/usr/local/cuda/bin/nvcc")
    message(WARNING "CMAKE_CUDA_COMPILER guessed: ${CMAKE_CUDA_COMPILER}")
  endif()
   ```




[GitHub] [incubator-mxnet] larroy opened a new pull request #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-02-24 Thread GitBox
larroy opened a new pull request #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293
 
 
   ## Description ##
   
   After recent changes to CMake, CMAKE_CUDA_COMPILER is not picked up 
automatically, as nvcc is not usually on the PATH. This sets a reasonable 
default which is also used by convention by NVidia tooling which symlinks 
/usr/local/cuda to the default cuda version.
   




[GitHub] [incubator-mxnet] connorgoggins commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-02-24 Thread GitBox
connorgoggins commented on a change in pull request #17632: [Large Tensor] 
Fixed RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r383532695
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -123,7 +123,7 @@ struct RNNParam : public dmlc::Parameter<RNNParam> {
 };
 
 inline int GetRnnParamSize(int num_layer,
-   int input_size,
+   index_t input_size,
 
 Review comment:
   Good point, fixing.




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-02-24 Thread GitBox
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed 
RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r383532362
 
 

 ##
 File path: tests/nightly/test_large_array.py
 ##
 @@ -455,6 +455,21 @@ def npy_instance_norm(data, gamma, beta, axis, eps=1E-5):
 assert_almost_equal(out, out_nd.asnumpy(), forward_check_eps,
 forward_check_eps)
 
+def check_rnn():
+    data = nd.random_normal(shape=(2**28, 4, 4))
+    parameters = nd.random_normal(shape=(7,))
+    state = nd.random_normal(shape=(1, 4, 1))
+    mode = 'rnn_relu'
+    state_size = 1
+    num_layers = 1
+
+    out = nd.RNN(data=data, parameters=parameters, state=state, mode=mode,
+                 state_size=state_size, num_layers=num_layers)
+
+    assert out.shape[0] == 268435456
 
 Review comment:
   Please use named constants for constant values. It's good practice, and you 
may need to re-use them in future tests too.




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-02-24 Thread GitBox
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed 
RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r383532029
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -361,9 +361,9 @@ void RNNBackward(DType* ws,
  DType* rs,
  const int num_layers,
  const int direction,
- const int seq_length,
- const int batch_size,
- const int input_size,
+ const index_t seq_length,
+ const index_t batch_size,
+ const index_t input_size,
 
 Review comment:
   Did you check that `seq_length`, `batch_size`, and `input_size` are `index_t` 
in LstmBackwardTraining, GruBackwardTraining, and VanillaRNNBackwardTraining? 
If so, please let me know here; otherwise you may need to update them too.




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-02-24 Thread GitBox
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed 
RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r383531766
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -278,9 +278,9 @@ void RNNForwardTraining(DType* ws,
 bool state_outputs,
 const int num_layers,
 const int direction,
-const int seq_length,
-const int batch_size,
-const int input_size,
+const index_t seq_length,
+const index_t batch_size,
+const index_t input_size,
 
 Review comment:
   Did you check that `seq_length`, `batch_size`, and `input_size` are `index_t` 
in LstmForwardTraining, GruForwardTraining, and VanillaRNNForwardTraining? 
If so, please let me know here; otherwise you may need to update them too.




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-02-24 Thread GitBox
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed 
RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r383529334
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -123,7 +123,7 @@ struct RNNParam : public dmlc::Parameter<RNNParam> {
 };
 
 inline int GetRnnParamSize(int num_layer,
-   int input_size,
+   index_t input_size,
 
 Review comment:
   Are you sure you don't have to change size1, size2, proj_size and param_size:
   
   https://github.com/apache/incubator-mxnet/pull/17632/files#diff-6dfdca409e69cc495f286170fe1e553eR143
   https://github.com/apache/incubator-mxnet/pull/17632/files#diff-6dfdca409e69cc495f286170fe1e553eR152




[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op

2020-02-24 Thread GitBox
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed 
RNN op
URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r383529921
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -182,8 +182,8 @@ inline int GetRnnBiasSize(int num_layer,
  *  - output -> h[t](, c[t] additionally with Lstm) time by time(sz: NxH(x2))
  *  - intermediate y[1...T] as next layer's inputs(sz: TxNxHxD)
  */
-inline size_t GetRNNWorkspaceSize(int seq_length,
-  int batch_size,
+inline size_t GetRNNWorkspaceSize(index_t seq_length,
+  index_t batch_size,
 
 Review comment:
   Can batch_size be negative? @apeforest what do you think?
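The diff above widens the arguments of `GetRNNWorkspaceSize`: even though the function returns `size_t`, a product of `int` dimensions overflows before it is ever widened. A hedged Python sketch of that failure mode, emulating 32-bit signed multiplication (the helper name is illustrative):

```python
def mul_int32(a, b):
    """Emulate 32-bit signed multiplication, as C `int` arithmetic would do it."""
    r = (a * b) & 0xFFFFFFFF             # keep the low 32 bits
    return r - 2**32 if r >= 2**31 else r

# shapes from the check_rnn test discussed above: (2**28, 4, 4)
seq_length, batch_size, hidden = 2**28, 4, 4
print(mul_int32(mul_int32(seq_length, batch_size), hidden))  # wraps to 0
print(seq_length * batch_size * hidden)                      # correct: 4294967296
```

This is why the intermediate dimension variables themselves need `index_t`, not just the return type.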






[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17675: [Large Tensor] Fix multi_lars op

2020-02-24 Thread GitBox
access2rohit commented on a change in pull request #17675: [Large Tensor] Fix 
multi_lars op
URL: https://github.com/apache/incubator-mxnet/pull/17675#discussion_r383528199
 
 

 ##
 File path: tests/nightly/test_large_array.py
 ##
 @@ -455,6 +455,20 @@ def npy_instance_norm(data, gamma, beta, axis, eps=1E-5):
 assert_almost_equal(out, out_nd.asnumpy(), forward_check_eps,
 forward_check_eps)
 
+def check_multi_lars():
+    lrs = nd.random_normal(shape=(2**32 + 1, 1))
+    weights_sum_sq = nd.random_normal(shape=(2**32 + 1, 1))
+    grads_sum_sq = nd.random_normal(shape=(2**32 + 1, 1))
+    wds = nd.random_normal(shape=(2**32 + 1, 1))
+    eta = .1
+    eps = .9
+
+    out = nd.multi_lars(lrs=lrs, weights_sum_sq=weights_sum_sq, grads_sum_sq=grads_sum_sq,
+                        wds=wds, eta=eta, eps=eps)
+
+    assert out.shape[0] == 4294967297
 
 Review comment:
   Can you make this a constant and replace in the code ?




[GitHub] [incubator-mxnet] connorgoggins commented on issue #17675: [Large Tensor] Fix multi_lars op

2020-02-24 Thread GitBox
connorgoggins commented on issue #17675: [Large Tensor] Fix multi_lars op
URL: https://github.com/apache/incubator-mxnet/pull/17675#issuecomment-590556597
 
 
   @mxnet-label-bot add [pr-awaiting-review]




[GitHub] [incubator-mxnet] connorgoggins opened a new pull request #17675: [Large Tensor] Fix multi_lars op

2020-02-24 Thread GitBox
connorgoggins opened a new pull request #17675: [Large Tensor] Fix multi_lars op
URL: https://github.com/apache/incubator-mxnet/pull/17675
 
 
   ## Description ##
   The multi_lars op was previously breaking on large tensor (dimension >= 
2^32) data. With the following input:
   ```
   run_performance_test(nd.multi_lars,
                        inputs=[{"lrs": nd.random_normal(shape=(2**32 + 1, 1)),
                                 "weights_sum_sq": nd.random_normal(shape=(2**32 + 1, 1)),
                                 "grads_sum_sq": nd.random_normal(shape=(2**32 + 1, 1)),
                                 "wds": nd.random_normal(shape=(2**32 + 1, 1)),
                                 "eta": .1, "eps": .9}],
                        run_backward=False, warmup=1, runs=1)
   ```
   the following error was thrown:
   ```
   Segmentation fault: 11
   ```
   
   To root-cause this issue, I ran the previous command in a Python script 
under GDB and found that the underlying problem was in the data type used by 
the `MultiLARS` kernel in `multi_lars-inl.h`. The index variable `i` was 
declared as `int` when it should have been `index_t` to properly handle long 
int indices. I switched this variable to `index_t` in the struct header and, 
after rebuilding, the previous input command produced the correct output:
   ```
   INFO:root:Begin Benchmark - multi_lars
   INFO:root:Complete Benchmark - multi_lars
   [{'multi_lars': [{'inputs': {'lrs': '', 
'weights_sum_sq': '', 'grads_sum_sq': '', 'wds': '', 'eta': 0.1, 
'eps': 0.9}, 'max_storage_mem_alloc_cpu/0': 16106127.0, 
'avg_time_forward_multi_lars': 4904.6128}]}]
   ```
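The int-vs-index_t failure mode described above can be illustrated in plain Python (a standalone sketch of the overflow, not MXNet source): truncating an index larger than 2**31 - 1 to a signed 32-bit value wraps it, which is the kind of out-of-range indexing behind the segfault.

```python
# Standalone sketch (not MXNet code): emulate storing an index past the
# 32-bit range in a signed 32-bit `int` versus a 64-bit `index_t`.
def as_int32(x):
    # two's-complement wrap to 32 bits, as a C `int` would
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

large_index = 2**32 + 1          # first dim used in the repro above
print(as_int32(large_index))     # 1 -> wrapped, indexes the wrong element
print(large_index)               # 4294967297 -> fits in a 64-bit index_t
```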
   
   To ensure completeness and to prevent future breaking changes, I also added 
a nightly test for the multi_lars op with large tensor data in 
`tests/nightly/test_large_array.py`.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage
   - [x] Code is well-documented
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - M src/operator/contrib/multi_lars-inl.h
   - M tests/nightly/test_large_array.py
   
   ## Comments ##
   Tested on r5dn.24xl-ubuntu 16.04 and p2.16xl-ubuntu 16.04 with
   1. Individual op run
   2. Full OpPerf run
   
   ## Results ##
   The key difference between CPU and GPU tests was the instance type 
(r5dn.24xl for CPU, p2.16xl for GPU). All relevant build flags remain the same, 
and both were tested using CPU context.
   
   [Single operator test - multi_lars op (GPU)]() - pending
   [Single operator test - multi_lars op 
(CPU)](https://gist.github.com/connorgoggins/c4c3c672a25d6845b01cd1a5f5e15d5f)
   
   [Full OpPerf test (GPU)]() - pending
   [Full OpPerf test (CPU)]() - pending
   
   @apeforest @access2rohit @ChaiBapchya 




[GitHub] [incubator-mxnet] larroy closed pull request #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-02-24 Thread GitBox
larroy closed pull request #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293
 
 
   




[incubator-mxnet] branch master updated (a6ab49f -> 3f0b049)

2020-02-24 Thread reminisce
This is an automated email from the ASF dual-hosted git repository.

reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from a6ab49f  Implement np.random.pareto backward (#17607)
 add 3f0b049  MXNet FFI for Operator Imperative Invocation (#17510)

No new revisions were added by this update.

Summary of changes:
 ci/jenkins/Jenkins_steps.groovy|   10 +-
 include/mxnet/api_registry.h   |   48 +
 include/mxnet/expr_operator.h  |   58 +
 include/mxnet/ir/expr.h|  225 
 include/mxnet/node/container.h |  334 ++
 include/mxnet/node/node.h  |   63 +
 include/mxnet/runtime/c_runtime_api.h  |  165 +++
 include/mxnet/runtime/container.h  |  282 +
 include/mxnet/runtime/data_type.h  |  217 
 include/mxnet/runtime/ffi_helper.h |  131 +++
 include/mxnet/runtime/memory.h |  215 
 include/mxnet/runtime/ndarray.h|   45 +
 include/mxnet/runtime/object.h |  823 ++
 include/mxnet/runtime/packed_func.h| 1201 
 include/mxnet/runtime/registry.h   |  314 +
 include/mxnet/tuple.h  |   27 +
 python/mxnet/__init__.py   |5 +
 .../mxnet/{numpy/_register.py => _api_internal.py} |   14 +-
 python/mxnet/_ctypes/ndarray.py|   23 +-
 .../mxnet/{numpy/_register.py => _ffi/__init__.py} |   13 +-
 .../_register.py => _ffi/_ctypes/__init__.py}  |   14 +-
 python/mxnet/_ffi/_ctypes/function.py  |  120 ++
 python/mxnet/_ffi/_ctypes/object.py|   53 +
 python/mxnet/_ffi/_ctypes/types.py |   58 +
 .../{numpy/_register.py => _ffi/_cy3/__init__.py}  |   13 +-
 python/mxnet/_ffi/_cython/base.pxi |  103 ++
 python/mxnet/_ffi/_cython/convert.pxi  |   75 ++
 .../{numpy/_register.py => _ffi/_cython/core.pyx}  |   13 +-
 python/mxnet/_ffi/_cython/function.pxi |  163 +++
 .../_register.py => _ffi/_cython/ndarray.pxi}  |   12 +-
 python/mxnet/_ffi/base.py  |   86 ++
 python/mxnet/_ffi/function.py  |  162 +++
 python/mxnet/_ffi/node_generic.py  |   79 ++
 .../mxnet/{numpy/_register.py => _ffi/object.py}   |   16 +-
 .../{numpy/_register.py => _ffi/runtime_ctypes.py} |   17 +-
 .../mxnet/{numpy/_register.py => _global_var.py}   |   14 +-
 python/mxnet/{numpy/_register.py => api.py}|   12 +-
 .../{numpy/_register.py => cython/__init__.py} |   11 +-
 python/mxnet/cython/ndarray.pyx|   23 +-
 python/mxnet/ndarray/_internal.py  |9 +-
 .../numpy/_api_internal.py}|   10 +-
 python/mxnet/ndarray/numpy/_op.py  |   30 +-
 python/mxnet/numpy/_register.py|1 -
 python/mxnet/numpy/multiarray.py   |2 +-
 python/setup.py|   14 +
 src/api/_api_internal/_api_internal.cc |   64 ++
 src/api/operator/numpy/np_init_op.cc   |   55 +
 src/api/operator/numpy/np_tensordot_op.cc  |   76 ++
 src/api/operator/utils.cc  |   69 ++
 src/api/operator/utils.h   |   73 ++
 src/ir/expr.cc |   53 +
 src/lang/expr.cc   |   32 +
 src/lang/ir.cc |   33 +
 src/operator/numpy/np_tensordot_op-inl.h   |   13 +
 src/operator/tensor/init_op.h  |9 +
 src/runtime/c_runtime_api.cc   |   84 ++
 src/runtime/object.cc  |  215 
 src/runtime/object_internal.h  |   53 +
 src/runtime/registry.cc|  145 +++
 59 files changed, 6138 insertions(+), 159 deletions(-)
 create mode 100644 include/mxnet/api_registry.h
 create mode 100644 include/mxnet/expr_operator.h
 create mode 100644 include/mxnet/ir/expr.h
 create mode 100644 include/mxnet/node/container.h
 create mode 100644 include/mxnet/node/node.h
 create mode 100644 include/mxnet/runtime/c_runtime_api.h
 create mode 100644 include/mxnet/runtime/container.h
 create mode 100644 include/mxnet/runtime/data_type.h
 create mode 100644 include/mxnet/runtime/ffi_helper.h
 create mode 100644 include/mxnet/runtime/memory.h
 create mode 100644 include/mxnet/runtime/ndarray.h
 create mode 100644 include/mxnet/runtime/object.h
 create mode 100644 include/mxnet/runtime/packed_func.h
 create mode 100644 include/mxnet/runtime/registry.h
 copy python/mxnet/{numpy/_register.py => _api_internal.py} (67%)
 copy python/mxnet/{numpy/_register.py => _ffi/__init__.py} (71%)
 copy 


[GitHub] [incubator-mxnet] reminisce merged pull request #17510: MXNet FFI for Operator Imperative Invocation

2020-02-24 Thread GitBox
reminisce merged pull request #17510: MXNet FFI for Operator Imperative 
Invocation
URL: https://github.com/apache/incubator-mxnet/pull/17510
 
 
   




[GitHub] [incubator-mxnet] HahTK commented on issue #17623: Dynamic subgraph compile support

2020-02-24 Thread GitBox
HahTK commented on issue #17623: Dynamic subgraph compile support
URL: https://github.com/apache/incubator-mxnet/pull/17623#issuecomment-590531101
 
 
   @samskalicky :
   I'll let the reviewers go through the code in detail, but from a quick 
perusal, and with the aux support added, the feature looks good.
   
   Things to think about in the future :
   1. For backends with very long compile time, a weight_update function may be 
useful.
   It also serves as a path to dynamically swap models (if so desired).
   
   2. Some backends may absorb the weights into the bin itself.
   For those we should provide an option for the partitioner to remove the 
redundant weight nodes that have been fully absorbed into bins.
   The corner case would of course be weights that are only partly absorbed.
   This would reduce the memory footprint and improve speed by avoiding the 
need to shove around unused weights.
   
   I think both of these items can be done in the future; I'm just enumerating 
them here.




[GitHub] [incubator-mxnet] cjolivier01 edited a comment on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
cjolivier01 edited a comment on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590527708
 
 
   Another thing to note is that OpenMP with LLVM also installs libgomp.so as 
a symlink to libomp.so, so there's a good chance that libomp.so will be loaded 
no matter what, depending on whether a system has clang/OpenMP installed at 
all and where it falls in the link order. So unless I am missing some clever 
logic, what MKL is doing with its dynamic loading is a cause for concern.
   
   Also of note, clang seems to also put a symlink to libiomp5 (in addition to 
libgomp):
   ```bash
   [chriso@chriso-ripper:~/src/mxnet (master)]ls -l /usr/local/lib/lib*omp*.so*
   lrwxrwxrwx 1 root root  9 Feb 20 14:37 /usr/local/lib/libgomp.so -> 
libomp.so
   lrwxrwxrwx 1 root root  9 Feb 20 14:37 /usr/local/lib/libiomp5.so -> 
libomp.so
   -rw-r--r-- 1 root root 953376 Feb 20 14:36 /usr/local/lib/libomp.so
   -rw-r--r-- 1 root root  66072 Feb 20 14:36 /usr/local/lib/libomptarget.so
   ```
   So it seems link order is important?
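A quick way to spot the clash discussed above is to scan a library's link-time dependencies for more than one OpenMP runtime family (libomp, libgomp, libiomp5). The helper below is a hypothetical illustration, not part of MXNet; note that it only sees link-time dependencies, so a runtime dlopen of libiomp5 by libmkl_rt.so would still be missed.

```python
# Hypothetical helper: flag when ldd-style output lists more than one
# OpenMP runtime family, the situation "OMP: Error #15" complains about.
import re

def openmp_runtimes(ldd_output):
    # match libomp.so, libgomp.so, libiomp5.so, with optional versions
    names = re.findall(r'\blib(?:iomp5|gomp|omp)\.so(?:\.\d+)?', ldd_output)
    return sorted(set(names))

sample = (
    "libomp.so => /usr/local/lib/libomp.so (0x7f084ab56000)\n"
    "libmkl_rt.so => /opt/intel/mkl/lib/intel64/libmkl_rt.so (0x7f084bfce000)\n"
)
print(openmp_runtimes(sample))   # ['libomp.so'] -- only link-time deps show up
```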





[GitHub] [incubator-mxnet] leezu commented on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
leezu commented on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590523257
 
 
   > However, even if I remove the openmp build in CMakeLists.txt and build 
with clang, I get that warning, since it pulls in libomp from clang (I am using 
clang8):
   
   When building with MKL, are we supposed to build with `libiomp` as well 
@cjolivier01 @pengzhao-intel?
   If so, we need to change our build accordingly.
   If not, this seems to be a bug in MKL.




[GitHub] [incubator-mxnet] cjolivier01 edited a comment on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
cjolivier01 edited a comment on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590521574
 
 
   I can reproduce now.  However, even if I remove the openmp build in 
CMakeLists.txt and build with clang, I get that warning, since it pulls in 
libomp from clang (I am using clang8):
   ```bash
   [chriso@chriso-ripper:~/src/mxnet (master)]PYTHONPATH=$(pwd)/python python3 
test.py 
   OMP: Error #15: Initializing libiomp5.so, but found libomp.so already 
initialized.
   OMP: Hint This means that multiple copies of the OpenMP runtime have been 
linked into the program. That is dangerous, since it can degrade performance or 
cause incorrect results. The best thing to do is to ensure that only a single 
OpenMP runtime is linked into the process, e.g. by avoiding static linking of 
the OpenMP runtime in any library. As an unsafe, unsupported, undocumented 
workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to 
allow the program to continue to execute, but that may cause crashes or 
silently produce incorrect results. For more information, please see 
http://www.intel.com/software/products/support/.
   Aborted (core dumped)
   ```
   ```
   [chriso@chriso-ripper:~/src/mxnet/build (master)]ldd libmxnet.so 
   linux-vdso.so.1 (0x7ffd55ab4000)
   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f084c8cd000)
   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7f084c6ae000)
   libmkl_rt.so => /opt/intel/mkl/lib/intel64/libmkl_rt.so 
(0x7f084bfce000)
   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f084bdc6000)
   liblapack.so.3 => /usr/lib/x86_64-linux-gnu/liblapack.so.3 
(0x7f084b54)
   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7f084b1b7000)
   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f084ae19000)
   libomp.so => /usr/local/lib/libomp.so (0x7f084ab56000)
   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7f084a93e000)
   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f084a54d000)
   /lib64/ld-linux-x86-64.so.2 (0x7f0855365000)
   libopenblas.so.0 => /usr/lib/x86_64-linux-gnu/libopenblas.so.0 
(0x7f08482a7000)
   libgfortran.so.4 => /usr/lib/x86_64-linux-gnu/libgfortran.so.4 
(0x7f0847ec8000)
   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
(0x7f0847c88000)
   ```
   
   so this seems like a systemic problem with mkl to me.
   




[GitHub] [incubator-mxnet] cjolivier01 edited a comment on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
cjolivier01 edited a comment on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590521574
 
 
   I can reproduce now.  However, even if I remove the openmp build in 
CMakeLists.txt and build with clang, I get that warning, since it pulls in 
libomp from clang (I am using clang8):
   ```bash
   [chriso@chriso-ripper:~/src/mxnet/build (master)]ldd libmxnet.so 
   linux-vdso.so.1 (0x7ffd55ab4000)
   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f084c8cd000)
   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7f084c6ae000)
   libmkl_rt.so => /opt/intel/mkl/lib/intel64/libmkl_rt.so 
(0x7f084bfce000)
   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f084bdc6000)
   liblapack.so.3 => /usr/lib/x86_64-linux-gnu/liblapack.so.3 
(0x7f084b54)
   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7f084b1b7000)
   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f084ae19000)
   libomp.so => /usr/local/lib/libomp.so (0x7f084ab56000)
   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7f084a93e000)
   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f084a54d000)
   /lib64/ld-linux-x86-64.so.2 (0x7f0855365000)
   libopenblas.so.0 => /usr/lib/x86_64-linux-gnu/libopenblas.so.0 
(0x7f08482a7000)
   libgfortran.so.4 => /usr/lib/x86_64-linux-gnu/libgfortran.so.4 
(0x7f0847ec8000)
   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
(0x7f0847c88000)
   ```
   ```bash
   [chriso@chriso-ripper:~/src/mxnet (master)]PYTHONPATH=$(pwd)/python python3 
test.py 
   OMP: Error #15: Initializing libiomp5.so, but found libomp.so already 
initialized.
   OMP: Hint This means that multiple copies of the OpenMP runtime have been 
linked into the program. That is dangerous, since it can degrade performance or 
cause incorrect results. The best thing to do is to ensure that only a single 
OpenMP runtime is linked into the process, e.g. by avoiding static linking of 
the OpenMP runtime in any library. As an unsafe, unsupported, undocumented 
workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to 
allow the program to continue to execute, but that may cause crashes or 
silently produce incorrect results. For more information, please see 
http://www.intel.com/software/products/support/.
   Aborted (core dumped)
   ```
   ```[chriso@chriso-ripper:~/src/mxnet/build (master)]ldd libmxnet.so 
   linux-vdso.so.1 (0x7ffd55ab4000)
   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f084c8cd000)
   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7f084c6ae000)
   libmkl_rt.so => /opt/intel/mkl/lib/intel64/libmkl_rt.so 
(0x7f084bfce000)
   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f084bdc6000)
   liblapack.so.3 => /usr/lib/x86_64-linux-gnu/liblapack.so.3 
(0x7f084b54)
   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7f084b1b7000)
   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f084ae19000)
   libomp.so => /usr/local/lib/libomp.so (0x7f084ab56000)
   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7f084a93e000)
   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f084a54d000)
   /lib64/ld-linux-x86-64.so.2 (0x7f0855365000)
   libopenblas.so.0 => /usr/lib/x86_64-linux-gnu/libopenblas.so.0 
(0x7f08482a7000)
   libgfortran.so.4 => /usr/lib/x86_64-linux-gnu/libgfortran.so.4 
(0x7f0847ec8000)
   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
(0x7f0847c88000)
   ```
   
   so this seems like a systemic problem with mkl to me.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cjolivier01 edited a comment on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
cjolivier01 edited a comment on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590521574
 
 
   I can reproduce now.  However, even if I remove the openmp build in 
CMakeLists.txt and build with clang, I get that warning, since it pulls in 
libomp from clang (I am using clang8):
   ```bash
   [chriso@chriso-ripper:~/src/mxnet/build (master)]ldd libmxnet.so 
   linux-vdso.so.1 (0x7ffd55ab4000)
   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f084c8cd000)
   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7f084c6ae000)
   libmkl_rt.so => /opt/intel/mkl/lib/intel64/libmkl_rt.so 
(0x7f084bfce000)
   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f084bdc6000)
   liblapack.so.3 => /usr/lib/x86_64-linux-gnu/liblapack.so.3 
(0x7f084b54)
   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7f084b1b7000)
   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f084ae19000)
   libomp.so => /usr/local/lib/libomp.so (0x7f084ab56000)
   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7f084a93e000)
   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f084a54d000)
   /lib64/ld-linux-x86-64.so.2 (0x7f0855365000)
   libopenblas.so.0 => /usr/lib/x86_64-linux-gnu/libopenblas.so.0 
(0x7f08482a7000)
   libgfortran.so.4 => /usr/lib/x86_64-linux-gnu/libgfortran.so.4 
(0x7f0847ec8000)
   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
(0x7f0847c88000)
   ```
   ```bash
   [chriso@chriso-ripper:~/src/mxnet (master)]PYTHONPATH=$(pwd)/python python3 
test.py 
   OMP: Error #15: Initializing libiomp5.so, but found libomp.so already 
initialized.
   OMP: Hint This means that multiple copies of the OpenMP runtime have been 
linked into the program. That is dangerous, since it can degrade performance or 
cause incorrect results. The best thing to do is to ensure that only a single 
OpenMP runtime is linked into the process, e.g. by avoiding static linking of 
the OpenMP runtime in any library. As an unsafe, unsupported, undocumented 
workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to 
allow the program to continue to execute, but that may cause crashes or 
silently produce incorrect results. For more information, please see 
http://www.intel.com/software/products/support/.
   Aborted (core dumped)
   ```
   ```[chriso@chriso-ripper:~/src/mxnet/build (master)]ldd libmxnet.so 
   linux-vdso.so.1 (0x7ffd55ab4000)
   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f084c8cd000)
   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7f084c6ae000)
   libmkl_rt.so => /opt/intel/mkl/lib/intel64/libmkl_rt.so 
(0x7f084bfce000)
   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f084bdc6000)
   liblapack.so.3 => /usr/lib/x86_64-linux-gnu/liblapack.so.3 
(0x7f084b54)
   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7f084b1b7000)
   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f084ae19000)
   libomp.so => /usr/local/lib/libomp.so (0x7f084ab56000)
   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7f084a93e000)
   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f084a54d000)
   /lib64/ld-linux-x86-64.so.2 (0x7f0855365000)
   libopenblas.so.0 => /usr/lib/x86_64-linux-gnu/libopenblas.so.0 
(0x7f08482a7000)
   libgfortran.so.4 => /usr/lib/x86_64-linux-gnu/libgfortran.so.4 
(0x7f0847ec8000)
   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
(0x7f0847c88000)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cjolivier01 commented on issue #17641: OpenMP Error

2020-02-24 Thread GitBox
cjolivier01 commented on issue #17641: OpenMP Error
URL: 
https://github.com/apache/incubator-mxnet/issues/17641#issuecomment-590521574
 
 
   I can reproduce now.  However, even if I remove the openmp build in 
CMakeLists.txt and build with clang, I get that warning, since it pulls in 
libomp from clang (I am using clang8):
   
   [chriso@chriso-ripper:~/src/mxnet/build (master)]ldd libmxnet.so 
   linux-vdso.so.1 (0x7ffd55ab4000)
   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f084c8cd000)
   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7f084c6ae000)
   libmkl_rt.so => /opt/intel/mkl/lib/intel64/libmkl_rt.so 
(0x7f084bfce000)
   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f084bdc6000)
   liblapack.so.3 => /usr/lib/x86_64-linux-gnu/liblapack.so.3 
(0x7f084b54)
   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7f084b1b7000)
   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f084ae19000)
   libomp.so => /usr/local/lib/libomp.so (0x7f084ab56000)
   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 
(0x7f084a93e000)
   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f084a54d000)
   /lib64/ld-linux-x86-64.so.2 (0x7f0855365000)
   libopenblas.so.0 => /usr/lib/x86_64-linux-gnu/libopenblas.so.0 
(0x7f08482a7000)
   libgfortran.so.4 => /usr/lib/x86_64-linux-gnu/libgfortran.so.4 
(0x7f0847ec8000)
   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 
(0x7f0847c88000)
   [chriso@chriso-ripper:~/src/mxnet/build (master)]cd ..
   ```
   [chriso@chriso-ripper:~/src/mxnet (master)]PYTHONPATH=$(pwd)/python python3 
test.py 
   OMP: Error #15: Initializing libiomp5.so, but found libomp.so already 
initialized.
   OMP: Hint This means that multiple copies of the OpenMP runtime have been 
linked into the program. That is dangerous, since it can degrade performance or 
cause incorrect results. The best thing to do is to ensure that only a single 
OpenMP runtime is linked into the process, e.g. by avoiding static linking of 
the OpenMP runtime in any library. As an unsafe, unsupported, undocumented 
workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to 
allow the program to continue to execute, but that may cause crashes or 
silently produce incorrect results. For more information, please see 
http://www.intel.com/software/products/support/.
   Aborted (core dumped)
   ```
   ```
   [chriso@chriso-ripper:~/src/mxnet/build (master)]ldd libmxnet.so
   linux-vdso.so.1 (0x7ffd55ab4000)
   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f084c8cd000)
   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f084c6ae000)
   libmkl_rt.so => /opt/intel/mkl/lib/intel64/libmkl_rt.so (0x7f084bfce000)
   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f084bdc6000)
   liblapack.so.3 => /usr/lib/x86_64-linux-gnu/liblapack.so.3 (0x7f084b54)
   libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x7f084b1b7000)
   libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f084ae19000)
   libomp.so => /usr/local/lib/libomp.so (0x7f084ab56000)
   libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x7f084a93e000)
   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f084a54d000)
   /lib64/ld-linux-x86-64.so.2 (0x7f0855365000)
   libopenblas.so.0 => /usr/lib/x86_64-linux-gnu/libopenblas.so.0 (0x7f08482a7000)
   libgfortran.so.4 => /usr/lib/x86_64-linux-gnu/libgfortran.so.4 (0x7f0847ec8000)
   libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 (0x7f0847c88000)
   ```
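   The Error #15 above arises because this `ldd` output shows two OpenMP runtimes linked into the process at once: LLVM's `libomp.so` plus the `libiomp5.so` that MKL's `libmkl_rt.so` loads at runtime. A minimal sketch of how one might spot the conflict from `ldd`-style output (the helper name and the sample lines below are hypothetical, not part of MXNet):

   ```shell
   # Hypothetical helper: read ldd output on stdin and print each distinct
   # OpenMP runtime found; more than one result indicates the Error #15 conflict.
   detect_omp_runtimes() {
     grep -oE 'lib(iomp5|omp|gomp)\.so' | sort -u
   }

   # Sample lines modeled on the ldd output above: libomp plus MKL's libiomp5.
   printf 'libomp.so => /usr/local/lib/libomp.so\nlibiomp5.so => /opt/intel/lib/libiomp5.so\n' \
     | detect_omp_runtimes
   # prints two runtimes: libiomp5.so and libomp.so

   # The unsafe, unsupported workaround named in the error message itself
   # (may crash or silently produce wrong results), applied to the failing run:
   # KMP_DUPLICATE_LIB_OK=TRUE PYTHONPATH=$(pwd)/python python3 test.py
   ```

   The safer fix suggested by the hint is to ensure only one OpenMP runtime ends up linked, e.g. by pointing the build at a single runtime rather than mixing libomp with MKL's iomp5.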


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] stu1130 edited a comment on issue #17587: Need help: how to manually build whl

2020-02-24 Thread GitBox
stu1130 edited a comment on issue #17587: Need help: how to manually build whl
URL: https://github.com/apache/incubator-mxnet/issues/17587#issuecomment-590514391
 
 
   @ChaiBapchya could you take a look at this question?




[GitHub] [incubator-mxnet] stu1130 commented on issue #17587: Need help: how to manually build whl

2020-02-24 Thread GitBox
stu1130 commented on issue #17587: Need help: how to manually build whl
URL: https://github.com/apache/incubator-mxnet/issues/17587#issuecomment-590514391
 
 
   @ChaiBapchya could you take a look at this question?




[GitHub] [incubator-mxnet] stu1130 edited a comment on issue #17587: Need help: how to manually build whl

2020-02-24 Thread GitBox
stu1130 edited a comment on issue #17587: Need help: how to manually build whl
URL: https://github.com/apache/incubator-mxnet/issues/17587#issuecomment-590510489
 
 
   [@mxnet-label-bot add Question]




[GitHub] [incubator-mxnet] stu1130 commented on issue #17664: Question : Bounded delay

2020-02-24 Thread GitBox
stu1130 commented on issue #17664: Question : Bounded delay
URL: https://github.com/apache/incubator-mxnet/issues/17664#issuecomment-590513564
 
 
   @eric-haibin-lin could you take a look at this question?




[GitHub] [incubator-mxnet] stu1130 commented on issue #17539: implement a c++ operator using other outside functions in 'so' file

2020-02-24 Thread GitBox
stu1130 commented on issue #17539: implement a c++ operator using other outside functions in 'so' file
URL: https://github.com/apache/incubator-mxnet/issues/17539#issuecomment-590511665
 
 
   [@mxnet-label-bot add custom]



