
[GitHub] lebeg commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is forced on all platforms with OpenBLAS and c…

2018-05-30 Thread GitBox
lebeg commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is 
forced on all platforms with OpenBLAS and c…
URL: https://github.com/apache/incubator-mxnet/pull/11094#discussion_r191665190
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -363,17 +364,17 @@ elseif(UNIX)
   list(APPEND mxnet_LINKER_LIBS pthread)
 endif()
 
+
 # ---[ LAPack
-if(USE_LAPACK AND NOT MSVC)
+if(USE_LAPACK)
   add_definitions(-DMXNET_USE_LAPACK=1)
-  list(APPEND mxnet_LINKER_LIBS lapack)
-else(USE_LAPACK)
-  # Workaround for Windows until using new Jenkinsfile.
-  if(BLAS STREQUAL "Open" OR BLAS STREQUAL "open" OR USE_BLAS STREQUAL "Open" OR USE_BLAS STREQUAL "open")
-    add_definitions(-DMXNET_USE_LAPACK=1)
+  if (NOT MSVC)
+    list(APPEND mxnet_LINKER_LIBS lapack)
 
 Review comment:
   There is no separate liblapack for OpenBLAS, as far as I understand. Is there one for BLAS libraries other than OpenBLAS? As far as I can tell, CI only checks OpenBLAS.




[GitHub] pengzhao-intel commented on a change in pull request #11095: [MKLDNN] reorder the mem format for the AddBack mode in case src & dst is different

2018-05-30 Thread GitBox
pengzhao-intel commented on a change in pull request #11095: [MKLDNN] reorder 
the mem format for the AddBack mode in case src & dst is different
URL: https://github.com/apache/incubator-mxnet/pull/11095#discussion_r191668195
 
 

 ##
 File path: src/operator/nn/mkldnn/mkldnn_base.cc
 ##
 @@ -128,8 +128,14 @@ void CommitOutput(const NDArray &arr, const mkldnn_output_t &res) {
   if (res.first == CopyBack) {
     const_cast<NDArray &>(arr).CopyFrom(*res.second);
   } else if (res.first == AddBack) {
-    auto mem = arr.GetMKLDNNData(res.second->get_primitive_desc());
-    CHECK(mem != nullptr);
 
 Review comment:
   I think it can work, but it will be a little confusing to call the reorder API in all cases.




[GitHub] pengzhao-intel commented on issue #11095: [MKLDNN] reorder the mem format for the AddBack mode in case src & dst is different

2018-05-30 Thread GitBox
pengzhao-intel commented on issue #11095: [MKLDNN] reorder the mem format for 
the AddBack mode in case src & dst is different
URL: https://github.com/apache/incubator-mxnet/pull/11095#issuecomment-393062424
 
 
   The crash happens in the backward pass of convolution when updating the weights.

   It's not easy to create a Python-level test case with the different memory formats of the ndarray.

   Maybe we can add a CPP case along with @juliusshufan's CPP cases.
   
   




[GitHub] lebeg commented on a change in pull request #11053: [MXNET-244] Fixed armv7 wheel

2018-05-30 Thread GitBox
lebeg commented on a change in pull request #11053: [MXNET-244] Fixed armv7 
wheel
URL: https://github.com/apache/incubator-mxnet/pull/11053#discussion_r191669800
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -83,31 +102,40 @@ build_armv6() {
         -DBUILD_CPP_EXAMPLES=OFF \
         -Dmxnet_LINKER_LIBS=-lgfortran \
         -G Ninja /work/mxnet
+
     ninja
-    export MXNET_LIBRARY_PATH=`pwd`/libmxnet.so
-    cd /work/mxnet/python
-    python setup.py bdist_wheel --universal
-    cp dist/*.whl /work/build
+    build_wheel
+
     popd
 }
 
 build_armv7() {
     set -ex
     pushd .
     cd /work/build
-    cmake\
-        -DUSE_CUDA=OFF\
-        -DUSE_OPENCV=OFF\
-        -DUSE_OPENMP=OFF\
-        -DUSE_SIGNAL_HANDLER=ON\
-        -DCMAKE_BUILD_TYPE=RelWithDebInfo\
-        -DUSE_MKL_IF_AVAILABLE=OFF\
+
+    # Lapack functionality will be included and statically linked to openblas.
 
 Review comment:
   @szha I don't get your point, assuming you meant the [position-independent code (PIC)](https://gcc.gnu.org/onlinedocs/gcc/Code-Gen-Options.html#Code-Gen-Options) flag. It is important to understand that libgfortran comes from the cross-compilation lib folder `/usr/lib/arm-linux-gnueabihf/libgfortran.so.3.0.0` and is not a host library.




[GitHub] larroy commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is forced on all platforms with OpenBLAS and c…

2018-05-30 Thread GitBox
larroy commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is 
forced on all platforms with OpenBLAS and c…
URL: https://github.com/apache/incubator-mxnet/pull/11094#discussion_r191685343
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -363,17 +364,17 @@ elseif(UNIX)
   list(APPEND mxnet_LINKER_LIBS pthread)
 endif()
 
+
 # ---[ LAPack
-if(USE_LAPACK AND NOT MSVC)
+if(USE_LAPACK)
   add_definitions(-DMXNET_USE_LAPACK=1)
-  list(APPEND mxnet_LINKER_LIBS lapack)
-else(USE_LAPACK)
-  # Workaround for Windows until using new Jenkinsfile.
-  if(BLAS STREQUAL "Open" OR BLAS STREQUAL "open" OR USE_BLAS STREQUAL "Open" OR USE_BLAS STREQUAL "open")
-    add_definitions(-DMXNET_USE_LAPACK=1)
+  if (NOT MSVC)
+    list(APPEND mxnet_LINKER_LIBS lapack)
 
 Review comment:
   I don't get your comment. I tried to fix the logic: as I understood it, the previous logic worked around adding these flags in the Windows case and broke other platforms. Can you elaborate?




[GitHub] lebeg opened a new pull request #11096: [MXNET-472] ccache for docker builds

2018-05-30 Thread GitBox
lebeg opened a new pull request #11096: [MXNET-472]  ccache for docker builds
URL: https://github.com/apache/incubator-mxnet/pull/11096
 
 
   ## Description ##
   
   This is work on top of [Pedro's PR](https://github.com/apache/incubator-mxnet/pull/11036), but has a few improvements.
   
   ## Checklist ##
   ### Essentials ###
   - [x] The PR title starts with 
[MXNET-472](https://issues.apache.org/jira/projects/MXNET/issues/MXNET-472)
   - [x] Changes are complete
   
   ### Changes ###
   - [x] Latest ccache version, built for all the different Docker builds
   - [x] CPU and GPU builds are supported
   
   ## Comments ##
   
   The ccache dir is meant to be shared via EFS on instances with the same 
label on CI builds.




[GitHub] lebeg commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is forced on all platforms with OpenBLAS and c…

2018-05-30 Thread GitBox
lebeg commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is 
forced on all platforms with OpenBLAS and c…
URL: https://github.com/apache/incubator-mxnet/pull/11094#discussion_r191697700
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -363,17 +364,17 @@ elseif(UNIX)
   list(APPEND mxnet_LINKER_LIBS pthread)
 endif()
 
+
 # ---[ LAPack
-if(USE_LAPACK AND NOT MSVC)
+if(USE_LAPACK)
   add_definitions(-DMXNET_USE_LAPACK=1)
-  list(APPEND mxnet_LINKER_LIBS lapack)
-else(USE_LAPACK)
-  # Workaround for Windows until using new Jenkinsfile.
-  if(BLAS STREQUAL "Open" OR BLAS STREQUAL "open" OR USE_BLAS STREQUAL "Open" OR USE_BLAS STREQUAL "open")
-    add_definitions(-DMXNET_USE_LAPACK=1)
+  if (NOT MSVC)
+    list(APPEND mxnet_LINKER_LIBS lapack)
 
 Review comment:
   The `list(APPEND mxnet_LINKER_LIBS lapack)` line adds liblapack to the required linker libraries. But there is no such thing as liblapack for OpenBLAS, at least not for our builds. The CI passed in this case because, with this change, `-DUSE_LAPACK=OFF` becomes effective.




[GitHub] lebeg commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is forced on all platforms with OpenBLAS and c…

2018-05-30 Thread GitBox
lebeg commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is 
forced on all platforms with OpenBLAS and c…
URL: https://github.com/apache/incubator-mxnet/pull/11094#discussion_r191698058
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -363,17 +364,17 @@ elseif(UNIX)
   list(APPEND mxnet_LINKER_LIBS pthread)
 endif()
 
+
 # ---[ LAPack
-if(USE_LAPACK AND NOT MSVC)
+if(USE_LAPACK)
   add_definitions(-DMXNET_USE_LAPACK=1)
-  list(APPEND mxnet_LINKER_LIBS lapack)
-else(USE_LAPACK)
-  # Workaround for Windows until using new Jenkinsfile.
-  if(BLAS STREQUAL "Open" OR BLAS STREQUAL "open" OR USE_BLAS STREQUAL "Open" OR USE_BLAS STREQUAL "open")
-    add_definitions(-DMXNET_USE_LAPACK=1)
+  if (NOT MSVC)
+    list(APPEND mxnet_LINKER_LIBS lapack)
 
 Review comment:
   And there is a chance that liblapack exists for other BLAS libraries (ATLAS?), but we are not testing them in CI currently.




[GitHub] lebeg commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is forced on all platforms with OpenBLAS and c…

2018-05-30 Thread GitBox
lebeg commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is 
forced on all platforms with OpenBLAS and c…
URL: https://github.com/apache/incubator-mxnet/pull/11094#discussion_r191698810
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -363,17 +364,17 @@ elseif(UNIX)
   list(APPEND mxnet_LINKER_LIBS pthread)
 endif()
 
+
 # ---[ LAPack
-if(USE_LAPACK AND NOT MSVC)
+if(USE_LAPACK)
   add_definitions(-DMXNET_USE_LAPACK=1)
-  list(APPEND mxnet_LINKER_LIBS lapack)
-else(USE_LAPACK)
-  # Workaround for Windows until using new Jenkinsfile.
-  if(BLAS STREQUAL "Open" OR BLAS STREQUAL "open" OR USE_BLAS STREQUAL "Open" OR USE_BLAS STREQUAL "open")
-    add_definitions(-DMXNET_USE_LAPACK=1)
+  if (NOT MSVC)
+    list(APPEND mxnet_LINKER_LIBS lapack)
 
 Review comment:
   With your change you need to set `-DUSE_LAPACK=ON` in the ARM builds so as not to lose LAPACK functionality.




[GitHub] asmushetzel commented on issue #11025: added ravel/unravel operators

2018-05-30 Thread GitBox
asmushetzel commented on issue #11025: added ravel/unravel operators
URL: https://github.com/apache/incubator-mxnet/pull/11025#issuecomment-393112936
 
 
   Changed input names from "A" to "data" as requested. 




[GitHub] ZiyueHuang opened a new pull request #11097: fix nd.backward when out_grad is None

2018-05-30 Thread GitBox
ZiyueHuang opened a new pull request #11097: fix nd.backward when out_grad is 
None
URL: https://github.com/apache/incubator-mxnet/pull/11097
 
 
   ## Description ##
   Otherwise `ograd_handles` here (https://github.com/apache/incubator-mxnet/blob/master/src/c_api/c_api_ndarray.cc#L332) never becomes `nullptr`, and this part is wrong: https://github.com/apache/incubator-mxnet/blob/master/src/c_api/c_api_ndarray.cc#L350-L356.
 
   
   cc @szha @eric-haibin-lin 
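   A minimal sketch of the code path in question (standard autograd usage; the exact reproduction is an assumption, not part of this PR):

```python
import mxnet as mx
from mxnet import autograd

x = mx.nd.array([1.0, 2.0, 3.0])
x.attach_grad()
with autograd.record():
    y = x * 2

# Calling backward() with no explicit out_grad exercises the
# out_grad-is-None path described above.
y.backward()
print(x.grad)  # expected: [2. 2. 2.]
```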
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] marcoabreu commented on issue #11066: migrating docs build and publish job to secure nodes

2018-05-30 Thread GitBox
marcoabreu commented on issue #11066: migrating docs build and publish job to 
secure nodes
URL: https://github.com/apache/incubator-mxnet/pull/11066#issuecomment-393136237
 
 
   Well you misconfigured the job ;) You assigned a restricted job to an 
unrestricted slave. I have corrected it for you




[incubator-mxnet-site] branch asf-site updated: Nightly build

2018-05-30 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a8b598a  Nightly build
a8b598a is described below

commit a8b598a96f7caddc50105d4542a2f26b20cde0d1
Author: mxnet-ci 
AuthorDate: Wed May 30 12:06:07 2018 +

Nightly build
---
 date.txt | 1 -
 1 file changed, 1 deletion(-)

diff --git a/date.txt b/date.txt
deleted file mode 100644
index b5a7152..000
--- a/date.txt
+++ /dev/null
@@ -1 +0,0 @@
-Wed May 30 02:12:45 UTC 2018



[GitHub] jinhuang415 commented on a change in pull request #10433: [MXNET-290] MKLDNN support for model quantization

2018-05-30 Thread GitBox
jinhuang415 commented on a change in pull request #10433: [MXNET-290] MKLDNN 
support for model quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#discussion_r191752075
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -381,13 +381,12 @@ unittest_ubuntu_python3_cpu() {
 
 unittest_ubuntu_python3_cpu_mkldnn() {
     set -ex
-    export PYTHONPATH=./python/ 
+    export PYTHONPATH=./python/
     # MXNET_MKLDNN_DEBUG is buggy and produces false positives
     # https://github.com/apache/incubator-mxnet/issues/10026
     #export MXNET_MKLDNN_DEBUG=1  # Ignored if not present
     export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
     nosetests-3.4 --verbose tests/python/unittest
-    nosetests-3.4 --verbose tests/python/quantization
 
 Review comment:
   One note on removing "nosetests-3.4 --verbose tests/python/quantization" from unittest_ubuntu_python3_cpu_mkldnn: this is because we use tests/python/mkl/test_quantization_mkldnn.py (run by "nosetests-3.4 --verbose tests/python/mkl") to test MKLDNN quantization, so the tests/python/quantization path can be removed.




[GitHub] jinhuang415 commented on a change in pull request #10433: [MXNET-290] MKLDNN support for model quantization

2018-05-30 Thread GitBox
jinhuang415 commented on a change in pull request #10433: [MXNET-290] MKLDNN 
support for model quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#discussion_r191751428
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -382,7 +381,6 @@ unittest_ubuntu_python3_cpu() {
     #export MXNET_MKLDNN_DEBUG=1  # Ignored if not present
     export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
     nosetests-3.4 --verbose tests/python/unittest
-    nosetests-3.4 --verbose tests/python/quantization
 
 Review comment:
   @zheng-da @reminisce The difficulty is that we need a way to separate the naive CPU quantization test from the MKLDNN quantization test if we want to share the same test_quantization.py (our aim is to share code, as @reminisce mentioned), since they both use the CPU context. I figured out a way to set an environment variable USE_MKLDNN in test_quantization_mkldnn.py and check it in test_quantization.py; for the naive CPU path USE_MKLDNN is not set, so it goes down the naive path. We have added the quantization test for the CPU path back in the new diff. Please help to review and check whether there is a better way to do this. Thanks.
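   A rough sketch of that gating idea (the variable and file names follow this comment; the actual test code may differ):

```python
import os

# test_quantization_mkldnn.py would set the flag before importing the shared
# tests; the shared test_quantization.py would branch on it.
os.environ['USE_MKLDNN'] = '1'

def quantization_mode():
    """Pick the quantization path for the shared CPU-context tests."""
    if os.environ.get('USE_MKLDNN') == '1':
        return 'mkldnn-cpu'   # MKLDNN quantization path
    return 'naive-cpu'        # naive CPU quantization path

print(quantization_mode())    # 'mkldnn-cpu' when run through the MKLDNN wrapper
```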




[GitHub] chinakook commented on issue #9765: Installation with GPU on Fedora 27?

2018-05-30 Thread GitBox
chinakook commented on issue #9765: Installation with GPU on Fedora 27?
URL: 
https://github.com/apache/incubator-mxnet/issues/9765#issuecomment-393166514
 
 
   Ubuntu 18.04 / GCC 7 / CUDA 9.2 also has this problem when using make, but there is no problem with cmake.




[GitHub] chinakook commented on issue #11094: [MXNET-115] USE_LAPACK is forced on all platforms with OpenBLAS and c…

2018-05-30 Thread GitBox
chinakook commented on issue #11094: [MXNET-115] USE_LAPACK is forced on all 
platforms with OpenBLAS and c…
URL: https://github.com/apache/incubator-mxnet/pull/11094#issuecomment-393168668
 
 
   MKL also has LAPACK95, but it's not supported by MXNet.




[GitHub] chinakook commented on issue #10789: Reshape of input array when the shape is available only at runtime is not possible

2018-05-30 Thread GitBox
chinakook commented on issue #10789: Reshape of input array when the shape is 
available only at runtime is not possible 
URL: 
https://github.com/apache/incubator-mxnet/issues/10789#issuecomment-393185133
 
 
   Getting the shape of a symbol is very important in some situations. It can be done as follows:
   x = mx.sym.var('data', shape=(1, 3, 224, 224))
   y = resnet50(x)
   _, yshape, _ = y.infer_shape_partial()

   However, it's difficult to get the shape of a tensor, or to define the dims of weights according to the shape of a tensor, inside a HybridBlock in Gluon.
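   A runnable version of the snippet above, with a small hypothetical network standing in for resnet50:

```python
import mxnet as mx

x = mx.sym.var('data', shape=(1, 3, 224, 224))
# Hypothetical stand-in for resnet50(x) from the snippet above.
y = mx.sym.Convolution(x, num_filter=8, kernel=(3, 3), name='conv')
y = mx.sym.flatten(y)
y = mx.sym.FullyConnected(y, num_hidden=10, name='fc')

# infer_shape_partial returns (arg_shapes, out_shapes, aux_shapes).
_, yshape, _ = y.infer_shape_partial()
print(yshape)  # [(1, 10)]
```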




[GitHub] chinakook commented on issue #10889: [MXNET-382] Shape and Size Operator

2018-05-30 Thread GitBox
chinakook commented on issue #10889: [MXNET-382] Shape and Size Operator
URL: https://github.com/apache/incubator-mxnet/pull/10889#issuecomment-393185610
 
 
   If we have a symbol A with 4 dims, how can I get the size of dim 1 from the shape_nd(A) result?
   It's straightforward in Keras and TensorFlow using K.shape(A)[1] and A.shape[1].
   I think it's difficult to do this in MXNet's Symbol system.




[GitHub] chinakook commented on issue #11086: Infer shape error with softmaxoutput

2018-05-30 Thread GitBox
chinakook commented on issue #11086: Infer shape error with softmaxoutput
URL: 
https://github.com/apache/incubator-mxnet/issues/11086#issuecomment-393190465
 
 
   Flatten is just OK. It's not a bug.




[GitHub] chinakook commented on issue #11091: Symbolic .json file not compatible with .params file generated since MXNet 1.2

2018-05-30 Thread GitBox
chinakook commented on issue #11091: Symbolic .json file not compatible with 
.params file generated since MXNet 1.2
URL: 
https://github.com/apache/incubator-mxnet/issues/11091#issuecomment-393192105
 
 
   Using a Module's save_checkpoint or Gluon's export would be OK. Mixing Gluon's save with Symbol's save is a bad idea.
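   For instance, a minimal sketch of the Gluon export path (model and file names are illustrative only):

```python
import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Dense(10))
net.initialize()
net.hybridize()
net(mx.nd.zeros((1, 4)))        # run one forward pass so the graph can be exported

# Writes a matching pair: mymodel-symbol.json and mymodel-0000.params
net.export('mymodel', epoch=0)
```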




[GitHub] xinyu-intel commented on issue #11086: Infer shape error with softmaxoutput

2018-05-30 Thread GitBox
xinyu-intel commented on issue #11086: Infer shape error with softmaxoutput
URL: 
https://github.com/apache/incubator-mxnet/issues/11086#issuecomment-393192905
 
 
   Yes, you are right. By the way, is it possible to automatically perform a flatten before softmax when the softmax input is (N, C, 1, 1)? Some users assume this shape is (N, C) and forget to add a flatten before it. Thanks!
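   A small sketch of the explicit workaround, assuming an (N, C, 1, 1) input of shape (4, 10, 1, 1):

```python
import mxnet as mx

data = mx.sym.var('data')
label = mx.sym.var('label')

flat = mx.sym.flatten(data)                    # (4, 10, 1, 1) -> (4, 10)
out = mx.sym.SoftmaxOutput(data=flat, label=label)

_, out_shapes, _ = out.infer_shape(data=(4, 10, 1, 1), label=(4,))
print(out_shapes)  # [(4, 10)]
```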




[GitHub] chinakook commented on issue #11062: how to manually occupy all gpu memory like tensorflow?

2018-05-30 Thread GitBox
chinakook commented on issue #11062: how to manually occupy all gpu memory like 
tensorflow?
URL: 
https://github.com/apache/incubator-mxnet/issues/11062#issuecomment-393196433
 
 
   It's better to do it like this when you have a Tesla card, to prevent other processes from using the card. However, your card may be a Titan Xp with 12GB, and Firefox or Chrome may also need to use it.





[GitHub] jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model quantization

2018-05-30 Thread GitBox
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model 
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-393196924
 
 
   @reminisce @zheng-da We have resolved all the comments; would you check whether you have any further comments on the change?
   @marcoabreu @zheng-da We have seen a lot of Jenkins failures recently after submitting new changes, and most of them happen in the CPP: GPU unit test stage (see http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-10433/51/pipeline/726/). We tried on our local GPU and everything is fine, and re-triggering Jenkins sometimes makes it pass (it is not stable; sometimes we need to re-trigger several times). Our change should not impact the CPP: GPU testing, so would you help check whether this is a known issue with the Jenkins system or the MXNet base code? Or is there any way to debug the failure on Jenkins? Thanks. I copied the failure log below for your reference:
   
   ```
   [14:18:36] /work/mxnet/tests/cpp/engine/threaded_engine_test.cc:133: 
Stopping: NaiveEngine
   
   [14:18:36] /work/mxnet/tests/cpp/engine/threaded_engine_test.cc:135: 
Stopped: NaiveEngine Starting...
   
   [14:18:36] /work/mxnet/tests/cpp/engine/threaded_engine_test.cc:137: 
Started: NaiveEngine Done...
   
   [14:18:36] /work/mxnet/tests/cpp/engine/threaded_engine_test.cc:133: 
Stopping: ThreadedEnginePooled
   
   terminate called after throwing an instance of 'std::system_error'
   
 what():  Operation not permitted
   
   /work/runtime_functions.sh: line 476: 7 Aborted (core 
dumped) build/tests/mxnet_unit_tests
   
   build.py: 2018-05-30 14:18:38,174 Running of command in container failed 
(134): nvidia-docker run --rm -t --shm-size=500m -v 
/home/jenkins_slave/workspace/ut-cpp-gpu:/work/mxnet -v 
/home/jenkins_slave/workspace/ut-cpp-gpu/build:/work/build -u 1001:1001 
mxnet/build.ubuntu_gpu /work/runtime_functions.sh unittest_ubuntu_gpu_cpp
   
   build.py: 2018-05-30 14:18:38,175 You can try to get into the container by 
using the following command: nvidia-docker run --rm -t --shm-size=500m -v 
/home/jenkins_slave/workspace/ut-cpp-gpu:/work/mxnet -v 
/home/jenkins_slave/workspace/ut-cpp-gpu/build:/work/build -u 1001:1001 -ti 
--entrypoint /bin/bash mxnet/build.ubuntu_gpu /work/runtime_functions.sh 
unittest_ubuntu_gpu_cpp
   
   into container: False
   
   Traceback (most recent call last):
   
 File "ci/build.py", line 307, in 
   
   sys.exit(main())
   
 File "ci/build.py", line 243, in main
   
   container_run(platform, docker_binary, shared_memory_size, command)
   
 File "ci/build.py", line 154, in container_run
   
   raise subprocess.CalledProcessError(ret, cmd)
   
   subprocess.CalledProcessError: Command 'nvidia-docker run --rm -t 
--shm-size=500m -v /home/jenkins_slave/workspace/ut-cpp-gpu:/work/mxnet -v 
/home/jenkins_slave/workspace/ut-cpp-gpu/build:/work/build -u 1001:1001 
mxnet/build.ubuntu_gpu /work/runtime_functions.sh unittest_ubuntu_gpu_cpp' 
returned non-zero exit status 134
   
   script returned exit code 1
   ```




[GitHub] pwn1 opened a new issue #11098: Installation instructions MacOS/R/CPU don't work.

2018-05-30 Thread GitBox
pwn1 opened a new issue #11098: Installation instructions MacOS/R/CPU don't 
work.
URL: https://github.com/apache/incubator-mxnet/issues/11098
 
 
   ## Description
   (Brief description of the problem in no more than 2 sentences.)
   
   Doing the installation exactly as indicated on the website does not work -- it fails with a 404 Not Found error.
   
   ## Environment info (Required)
   
   --Python Info--
   ('Version  :', '2.7.15')
   ('Compiler :', 'GCC 4.2.1 Compatible Apple LLVM 9.1.0 
(clang-902.0.39.1)')
   ('Build:', ('default', 'May  1 2018 16:44:08'))
   ('Arch :', ('64bit', ''))
   Pip Info---
   ('Version  :', '10.0.1')
   ('Directory:', '/usr/local/lib/python2.7/site-packages/pip')
   --MXNet Info---
   No MXNet installed.
   --System Info--
   ('Platform :', 'Darwin-17.5.0-x86_64-i386-64bit')
   ('system   :', 'Darwin')
   ('node :', '8afbcfb5.cs.st-andrews.ac.uk')
   ('release  :', '17.5.0')
   ('version  :', 'Darwin Kernel Version 17.5.0: Fri Apr 13 19:32:32 PDT 
2018; root:xnu-4570.51.2~1/RELEASE_X86_64')
   --Hardware Info--
   ('machine  :', 'x86_64')
   ('processor:', 'i386')
   machdep.cpu.brand_string: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
   machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE 
MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ 
DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC 
MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C
   machdep.cpu.leaf7_features: SMEP ERMS RDWRFSGS TSC_THREAD_OFFSET BMI1 AVX2 
BMI2 INVPCID FPU_CSDS
   machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT RDTSCP TSCI
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0013 
sec, LOAD: 1.0522 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0137 sec, LOAD: 
0.4825 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.0147 sec, LOAD: 0.7350 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0465 sec, 
LOAD: 0.0671 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1305 sec, LOAD: 
0.7929 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0606 sec, LOAD: 
0.7719 sec.
   
   
   Package used (Python/R/Scala/Julia):
   R
   
   For R user, please provide R `sessionInfo()`:
   R version 3.5.0 (2018-04-23)
   Platform: x86_64-apple-darwin15.6.0 (64-bit)
   Running under: macOS High Sierra 10.13.4
   
   Matrix products: default
   BLAS: 
/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
   LAPACK: 
/Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib
   
   locale:
   [1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
   
   attached base packages:
   [1] stats graphics  grDevices utils datasets  methods   base 
   
   other attached packages:
   [1] palm_1.1.0  mvtnorm_1.0-7   twoplane_1.0
functional_0.6  HiddenMarkov_1.8-11 expm_0.999-2   
   [7] Matrix_1.2-14   boot_1.3-20 Rcpp_0.12.16   
   
   loaded via a namespace (and not attached):
[1] magic_1.5-8  ddalpha_1.3.2tidyr_0.8.0  
sfsmisc_1.1-2splines_3.5.0   
[6] foreach_1.4.4prodlim_2018.04.18   assertthat_0.2.0 
stats4_3.5.0 DRR_0.0.3   
   [11] robustbase_0.93-0ipred_0.9-6  pillar_1.2.2 
lattice_0.20-35  glue_1.2.0  
   [16] polyclip_1.6-1   minqa_1.2.4  colorspace_1.3-2 
recipes_0.1.2plyr_1.8.4  
   [21] psych_1.8.3.3timeDate_3043.102pkgconfig_2.0.1  
CVST_0.2-1   broom_0.4.4 
   [26] caret_6.0-79 purrr_0.2.4  scales_0.5.0 
tensor_1.5   gower_0.1.2 
   [31] lava_1.6.1   spatstat.utils_1.8-0 tibble_1.4.2 
mgcv_1.8-23  ggplot2_2.2.1   
   [36] withr_2.1.2  nnet_7.3-12  lazyeval_0.2.1   
mnormt_1.5-5 survival_2.41-3 
   [41] magrittr_1.5 deldir_0.1-15nlme_3.1-137 
MASS_7.3-49  gsl_1.9-10.3
   [46] dimRed_0.1.0 foreign_0.8-70   class_7.3-14 
tools_3.5.0  stringr_1.3.0   
   [51] kernlab_0.9-25   munsell_0.4.3bindrcpp_0.2.2   
compiler_3.5.0   RcppRoll_0.2.2  
   [56] rlang_0.2.0  grid_3.5.0   iterators_1.0.9  
goftest_1.1-1geometry_0.3-6  
   [61] gtable_0.2.0 ModelMetrics_1.1.0   codetools_0.2-15 
abind_1.4-5  reshape2_1.4.3  
   [66] R6_2.2.2 lubridate_1.7.4  

[GitHub] lebeg commented on a change in pull request #11053: [MXNET-244] Fixed armv7 wheel

2018-05-30 Thread GitBox
lebeg commented on a change in pull request #11053: [MXNET-244] Fixed armv7 
wheel
URL: https://github.com/apache/incubator-mxnet/pull/11053#discussion_r191825737
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -83,31 +102,40 @@ build_armv6() {
         -DBUILD_CPP_EXAMPLES=OFF \
         -Dmxnet_LINKER_LIBS=-lgfortran \
         -G Ninja /work/mxnet
+
     ninja
-    export MXNET_LIBRARY_PATH=`pwd`/libmxnet.so
-    cd /work/mxnet/python
-    python setup.py bdist_wheel --universal
-    cp dist/*.whl /work/build
+    build_wheel
+
     popd
 }
 
 build_armv7() {
     set -ex
     pushd .
     cd /work/build
-    cmake\
-        -DUSE_CUDA=OFF\
-        -DUSE_OPENCV=OFF\
-        -DUSE_OPENMP=OFF\
-        -DUSE_SIGNAL_HANDLER=ON\
-        -DCMAKE_BUILD_TYPE=RelWithDebInfo\
-        -DUSE_MKL_IF_AVAILABLE=OFF\
+
+    # Lapack functionality will be included and statically linked to openblas.
 
 Review comment:
   [Here](https://github.com/xianyi/OpenBLAS/issues/296) is proof that OpenBLAS does not have a separate lapack library.




[GitHub] JeanKossaifi opened a new issue #11099: Fancy indexing with a list instead of a tuple

2018-05-30 Thread GitBox
JeanKossaifi opened a new issue #11099: Fancy indexing with a list instead of a 
tuple
URL: https://github.com/apache/incubator-mxnet/issues/11099
 
 
   Currently, MXNet's fancy indexing will fail when indexing with a list rather 
than a tuple (while it works with NumPy or PyTorch).
   
   Minimal example to reproduce the issue:
   
   ```python
   import mxnet.ndarray as nd
   
   # Creating a simple third order tensor
   a = nd.arange(24).reshape((3, 4, 2))
   
   # This works fine
   a[[1, 2, 1], slice(None), [0, 1, 2]]
   
   # This should work but does not
   a[[slice(None), [1, 2, 1], [0, 1, 2]]]
   
   # This works as expected
   a[tuple([slice(None), [1, 2, 1], [0, 1, 2]])]
   ```




[GitHub] jakobetzel commented on issue #10791: Unable to install mxnet in R 3.5.0

2018-05-30 Thread GitBox
jakobetzel commented on issue #10791: Unable to install mxnet in R 3.5.0
URL: 
https://github.com/apache/incubator-mxnet/issues/10791#issuecomment-393218491
 
 
   @jeremiedb I do apologise, I was trying to install on Ubuntu. Sorry for the inconvenience.




[GitHub] lebeg opened a new issue #11100: Installation instructions for Windows don't work

2018-05-30 Thread GitBox
lebeg opened a new issue #11100: Installation instructions for Windows don't 
work
URL: https://github.com/apache/incubator-mxnet/issues/11100
 
 
   In the [Installing MXNet in 
Windows](https://mxnet.incubator.apache.org/install/windows_setup.html) 
instructions there is a paragraph:
   
   > Installing the Prebuilt Package on Windows
   > MXNet provides a prebuilt package for Windows. The prebuilt package 
includes the MXNet library, all of the dependent third-party libraries, a 
sample C++ solution for Visual Studio, and the Python installation script. To 
install the prebuilt package:
   >
   > Download the latest prebuilt package from the **Releases** tab of MXNet.
   Unpack the package into a folder, with an appropriate name, such as D:\MXNet.
   Open the folder, and install the package by double-clicking 
**setupenv.cmd**. This sets up all of the environment variables required by 
MXNet.
   Test the installation by opening the provided sample C++ Visual Studio 
solution and building it.
   This produces a library called libmxnet.dll.
   
   **Issues**
   * There are no prebuilt packages published on the mentioned [Releases](https://github.com/apache/incubator-mxnet/releases) page
   * The mentioned `setupenv.cmd` cannot be found anywhere
   




[GitHub] szha commented on a change in pull request #11053: [MXNET-244] Fixed armv7 wheel

2018-05-30 Thread GitBox
szha commented on a change in pull request #11053: [MXNET-244] Fixed armv7 wheel
URL: https://github.com/apache/incubator-mxnet/pull/11053#discussion_r191837181
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -83,31 +102,40 @@ build_armv6() {
         -DBUILD_CPP_EXAMPLES=OFF \
         -Dmxnet_LINKER_LIBS=-lgfortran \
         -G Ninja /work/mxnet
+
     ninja
-    export MXNET_LIBRARY_PATH=`pwd`/libmxnet.so
-    cd /work/mxnet/python
-    python setup.py bdist_wheel --universal
-    cp dist/*.whl /work/build
+    build_wheel
+
     popd
 }
 
 build_armv7() {
     set -ex
     pushd .
     cd /work/build
-    cmake\
-        -DUSE_CUDA=OFF\
-        -DUSE_OPENCV=OFF\
-        -DUSE_OPENMP=OFF\
-        -DUSE_SIGNAL_HANDLER=ON\
-        -DCMAKE_BUILD_TYPE=RelWithDebInfo\
-        -DUSE_MKL_IF_AVAILABLE=OFF\
+
+    # Lapack functionality will be included and statically linked to openblas.
 
 Review comment:
   It's not about openblas itself, but rather whether the library requires gfortran at runtime or not. If not, then a libgfortran .a file that was compiled with -fPIC is required. You mentioned static linking in the comment, so I assumed that you meant complete static linking.




[GitHub] szha commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is forced on all platforms with OpenBLAS and c…

2018-05-30 Thread GitBox
szha commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is 
forced on all platforms with OpenBLAS and c…
URL: https://github.com/apache/incubator-mxnet/pull/11094#discussion_r191838201
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -363,17 +364,17 @@ elseif(UNIX)
   list(APPEND mxnet_LINKER_LIBS pthread)
 endif()
 
+
 # ---[ LAPack
-if(USE_LAPACK AND NOT MSVC)
+if(USE_LAPACK)
   add_definitions(-DMXNET_USE_LAPACK=1)
-  list(APPEND mxnet_LINKER_LIBS lapack)
-else(USE_LAPACK)
-  # Workaround for Windows until using new Jenkinsfile.
-  if(BLAS STREQUAL "Open" OR BLAS STREQUAL "open" OR USE_BLAS STREQUAL "Open" OR USE_BLAS STREQUAL "open")
-    add_definitions(-DMXNET_USE_LAPACK=1)
+  if (NOT MSVC)
+    list(APPEND mxnet_LINKER_LIBS lapack)
 
 Review comment:
   Just create symlinks such as libcblas and liblapack to the compiled 
libopenblas. There’s no need to hack around.
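   A small sketch of that idea (the library directory is illustrative only; in the Docker build this would typically be done with `ln -s`):

```python
import os

# Assumed install location of the compiled OpenBLAS; adjust to the real prefix.
libdir = '/usr/local/openblas/lib'
target = os.path.join(libdir, 'libopenblas.so')

for alias in ('liblapack.so', 'libcblas.so'):
    link = os.path.join(libdir, alias)
    if not os.path.exists(link):
        os.symlink(target, link)
```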




[GitHub] lebeg commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is forced on all platforms with OpenBLAS and c…

2018-05-30 Thread GitBox
lebeg commented on a change in pull request #11094: [MXNET-115] USE_LAPACK is 
forced on all platforms with OpenBLAS and c…
URL: https://github.com/apache/incubator-mxnet/pull/11094#discussion_r191840897
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -363,17 +364,17 @@ elseif(UNIX)
   list(APPEND mxnet_LINKER_LIBS pthread)
 endif()
 
+
 # ---[ LAPack
-if(USE_LAPACK AND NOT MSVC)
+if(USE_LAPACK)
   add_definitions(-DMXNET_USE_LAPACK=1)
-  list(APPEND mxnet_LINKER_LIBS lapack)
-else(USE_LAPACK)
-  # Workaround for Windows until using new Jenkinsfile.
-  if(BLAS STREQUAL "Open" OR BLAS STREQUAL "open" OR USE_BLAS STREQUAL "Open" OR USE_BLAS STREQUAL "open")
-    add_definitions(-DMXNET_USE_LAPACK=1)
+  if (NOT MSVC)
+    list(APPEND mxnet_LINKER_LIBS lapack)
 
 Review comment:
   Ok, this might be an option




[GitHub] ptrendx commented on a change in pull request #11041: gpu mem pool strategy

2018-05-30 Thread GitBox
ptrendx commented on a change in pull request #11041: gpu mem pool strategy
URL: https://github.com/apache/incubator-mxnet/pull/11041#discussion_r191843458
 
 

 ##
 File path: src/storage/pooled_storage_manager.h
 ##
 @@ -71,7 +78,7 @@ class GPUPooledStorageManager final : public StorageManager {
  private:
   void DirectFreeNoLock(Storage::Handle handle) {
 cudaError_t err = cudaFree(handle.dptr);
-size_t size = handle.size + NDEV;
 
 Review comment:
   Yes, that is correct. 




[GitHub] lebeg commented on a change in pull request #11053: [MXNET-244] Fixed armv7 wheel

2018-05-30 Thread GitBox
lebeg commented on a change in pull request #11053: [MXNET-244] Fixed armv7 
wheel
URL: https://github.com/apache/incubator-mxnet/pull/11053#discussion_r191843584
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -83,31 +102,40 @@ build_armv6() {
         -DBUILD_CPP_EXAMPLES=OFF \
         -Dmxnet_LINKER_LIBS=-lgfortran \
         -G Ninja /work/mxnet
+
     ninja
-    export MXNET_LIBRARY_PATH=`pwd`/libmxnet.so
-    cd /work/mxnet/python
-    python setup.py bdist_wheel --universal
-    cp dist/*.whl /work/build
+    build_wheel
+
     popd
 }
 
 build_armv7() {
     set -ex
     pushd .
     cd /work/build
-    cmake\
-        -DUSE_CUDA=OFF\
-        -DUSE_OPENCV=OFF\
-        -DUSE_OPENMP=OFF\
-        -DUSE_SIGNAL_HANDLER=ON\
-        -DCMAKE_BUILD_TYPE=RelWithDebInfo\
-        -DUSE_MKL_IF_AVAILABLE=OFF\
+
+    # Lapack functionality will be included and statically linked to openblas.
 
 Review comment:
   Static linking is used for openblas, but gfortran needs to be linked dynamically. See this [comment](https://github.com/xianyi/OpenBLAS/issues/460#issuecomment-61293128) about `-static-libgfortran` not working properly.




[GitHub] zheng-da commented on issue #10433: [MXNET-290] MKLDNN support for model quantization

2018-05-30 Thread GitBox
zheng-da commented on issue #10433: [MXNET-290] MKLDNN support for model 
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-393250740
 
 
   @marcoabreu I also saw this failure many times. Do you have any idea what is going on?




[GitHub] spidyDev commented on issue #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
spidyDev commented on issue #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX 
models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#issuecomment-393256706
 
 
   Please create a folder inside python-pytest/onnx for "import", and move all the import-specific files in there. The "export"-specific backend will be added pretty soon.




[GitHub] reminisce commented on issue #11099: Fancy indexing with a list instead of a tuple

2018-05-30 Thread GitBox
reminisce commented on issue #11099: Fancy indexing with a list instead of a 
tuple
URL: 
https://github.com/apache/incubator-mxnet/issues/11099#issuecomment-393256981
 
 
   Thanks for reporting this. There are some concerns about inconsistent indexing behavior when using lists in the numpy world:
   https://github.com/numpy/numpy/issues/4940
   https://github.com/numpy/numpy/issues/6564
   That's why we don't want to introduce this kind of inconsistency into mxnet. From those threads, you can see that it's encouraged to avoid using lists as indices in numpy, and the same applies in mxnet.
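   For example, following the tuple-based form that the issue itself shows working:

```python
import mxnet.ndarray as nd

a = nd.arange(24).reshape((3, 4, 2))

# Build the index as a tuple rather than a list.
idx = (slice(None), [1, 2, 1], [0, 1, 2])
print(a[idx].shape)  # (3, 3)
```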




[GitHub] Roshrini commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
Roshrini commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] 
API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191864714
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/translation_utils.py
 ##
 @@ -148,21 +152,25 @@ def _fix_bias(op_name, attrs, num_inputs):
 raise ValueError("Unexpected number of inputs for: {}".format(op_name))
 return attrs
 
-def _fix_bias_shape(op_name, inputs, cls):
+def _fix_broadcast(op_name, inputs, broadcast_axis, cls):
 """A workaround to reshape bias term to (1, num_channel)."""
 if int(len(cls._params)) > 0:
 assert len(list(inputs)) == 2
-
-op_sym = symbol.reshape(inputs[1], shape=(1, -1, 1, 1))
+ 
+input0_shape = get_input_shape(inputs[0], cls)
+#creating reshape shape
+reshape_shape= list(len(input0_shape) * (1,))
+reshape_shape[broadcast_axis] = -1
+reshape_shape = tuple(reshape_shape)
+op_sym = symbol.reshape(inputs[1], shape=reshape_shape)
 if op_name == 'broadcast_add':
-op_sym = symbol.broadcast_add(op_sym, inputs[0])
+op_sym = symbol.broadcast_add(inputs[0], op_sym)
 elif op_name == 'broadcast_mul':
 
 Review comment:
   Calling "_fix_broadcast()" function for all broadcast_add, broadcast_sub, 
broadcast_mul, broadcast_div operations.
   Here, you are handling only broadcas_add and broadcast_mul
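   A hedged sketch of covering all four ops with one dispatch table (the real translation_utils.py may structure this differently):

```python
import mxnet.symbol as symbol

_BROADCAST_OPS = {
    'broadcast_add': symbol.broadcast_add,
    'broadcast_sub': symbol.broadcast_sub,
    'broadcast_mul': symbol.broadcast_mul,
    'broadcast_div': symbol.broadcast_div,
}

def apply_broadcast(op_name, lhs, rhs):
    """Apply the requested broadcast op, covering add/sub/mul/div uniformly."""
    if op_name not in _BROADCAST_OPS:
        raise ValueError("Unsupported broadcast op: {}".format(op_name))
    return _BROADCAST_OPS[op_name](lhs, rhs)
```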




[GitHub] Roshrini commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
Roshrini commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] 
API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191865487
 
 

 ##
 File path: tests/python-pytest/onnx/mxnet_backend_rep.py
 ##
 @@ -16,18 +16,19 @@
 # under the License.
 
 # coding: utf-8
-"""backend rep for onnx test infrastructure"""
+"""MXNet backend rep for onnx test infrastructure"""
 from collections import namedtuple
 
 Review comment:
   Not used anywhere; remove?




[GitHub] spidyDev commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
spidyDev commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] 
API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191863028
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/import_model.py
 ##
 @@ -52,3 +52,28 @@ def import_model(model_file):
     model_proto = onnx.load(model_file)
     sym, arg_params, aux_params = graph.from_onnx(model_proto.graph)
     return sym, arg_params, aux_params
+
+def get_model_metadata(model_file):
+    """
+    Returns the name and shape information of input and output tensors of the
+    given ONNX model file.
+
+    Parameters
+    ----------
+    model_file : str
+        ONNX model file name
+
+    Returns
+    -------
+    model_metadata : dict
+        A dictionary object mapping various metadata to its corresponding
+        value.
+    """
+    graph = GraphProto()
+    try:
+        import onnx
+    except ImportError:
+        raise ImportError("Onnx and protobuf need to be installed. "
+                          + "Instructions to install - https://github.com/onnx/onnx")
+    model_proto = onnx.load(model_file)
 
 Review comment:
   ?




[GitHub] spidyDev commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
spidyDev commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] 
API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191866364
 
 

 ##
 File path: tests/python-pytest/onnx/mxnet_backend.py
 ##
 @@ -74,80 +74,6 @@ def make_graph(node, inputs):
 
 return graph_proto
 
 
 Review comment:
   The make_graph(node, inputs) function is not needed any more.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] spidyDev commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
spidyDev commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] 
API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191865666
 
 

 ##
 File path: tests/python-pytest/onnx/gluon_backend.py
 ##
 @@ -0,0 +1,111 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+"""Gluon backend wrapper for onnx test infrastructure"""
+import mxnet as mx
+from mxnet import nd
+from mxnet.contrib.onnx._import.import_onnx import GraphProto
+import numpy as np
+try:
+from onnx import helper, TensorProto
+from onnx.backend.base import Backend
+except ImportError:
+raise ImportError("Onnx and protobuf need to be installed. Instructions to"
+  + " install - https://github.com/onnx/onnx#installation";)
+from gluon_backend_rep import GluonBackendRep
+
+# GluonBackend class will take an ONNX model with inputs, perform a 
computation,
+# and then return the output.
+# Implemented by following onnx docs guide:
+# https://github.com/onnx/onnx/blob/master/docs/ImplementingAnOnnxBackend.md
+
+class GluonBackend(Backend):
+"""Gluon backend for ONNX"""
+
+@staticmethod
+def make_graph(node, inputs):
 
 Review comment:
   Do we need this function?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
anirudhacharya commented on a change in pull request #10605: [MXNET-310] 
[ONNX-MXNet] API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191867452
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/import_model.py
 ##
 @@ -52,3 +52,28 @@ def import_model(model_file):
 model_proto = onnx.load(model_file)
 sym, arg_params, aux_params = graph.from_onnx(model_proto.graph)
 return sym, arg_params, aux_params
+
+def get_model_metadata(model_file):
+"""
+Returns the name and shape information of input and output tensors of the 
given ONNX model file.
+
+Parameters
+--
+model_file : str
+ONNX model file name
+
+Returns
+---
+model_metadata : dict
+A dictionary object mapping various metadata to its corresponding 
value.
+"""
+graph = GraphProto()
+try:
+import onnx
+except ImportError:
+raise ImportError("Onnx and protobuf need to be installed. "
+  + "Instructions to install - 
https://github.com/onnx/onnx";)
+model_proto = onnx.load(model_file)
 
 Review comment:
   i will add the file check


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #11097: fix nd.backward when out_grad is None

2018-05-30 Thread GitBox
piiswrong commented on issue #11097: fix nd.backward when out_grad is None
URL: https://github.com/apache/incubator-mxnet/pull/11097#issuecomment-393262701
 
 
   could you add a test?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #11097: fix nd.backward when out_grad is None

2018-05-30 Thread GitBox
piiswrong commented on issue #11097: fix nd.backward when out_grad is None
URL: https://github.com/apache/incubator-mxnet/pull/11097#issuecomment-393262851
 
 
   What would trigger this bug? I think x.backward() was working?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on a change in pull request #10889: [MXNET-382] Shape and Size Operator

2018-05-30 Thread GitBox
reminisce commented on a change in pull request #10889: [MXNET-382] Shape and 
Size Operator
URL: https://github.com/apache/incubator-mxnet/pull/10889#discussion_r191872060
 
 

 ##
 File path: src/operator/tensor/elemwise_unary_op.h
 ##
 @@ -388,6 +388,43 @@ void CastCompute(const nnvm::NodeAttrs& attrs,
   });
 }
 
+template
+void ShapeCompute(const nnvm::NodeAttrs& attrs,
+  const OpContext& ctx,
+  const std::vector& inputs,
+  const std::vector& req,
+  const std::vector& outputs) {
+  CHECK_EQ(inputs.size(), 1U);
+  CHECK_EQ(outputs.size(), 1U);
+  CHECK_EQ(req.size(), 1U);
+  const TBlob& in_data = inputs[0];
+  const TBlob& out_data = outputs[0];
+  mshadow::Stream *s = ctx.get_stream();
+  const TShape& in_shape = in_data.shape_;
+  MSHADOW_TYPE_SWITCH(out_data.type_flag_, DType, {
+mxnet_op::Kernel::Launch(
+  s, in_data.ndim(), out_data.dptr(), in_shape.data());
 
 Review comment:
   `in_shape.data` is a pointer in CPU memory which cannot be directly accessed 
on the GPU. You can use `Shape` instead.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on a change in pull request #10889: [MXNET-382] Shape and Size Operator

2018-05-30 Thread GitBox
reminisce commented on a change in pull request #10889: [MXNET-382] Shape and 
Size Operator
URL: https://github.com/apache/incubator-mxnet/pull/10889#discussion_r191868114
 
 

 ##
 File path: src/operator/mshadow_op.h
 ##
 @@ -96,6 +96,12 @@ struct identity_with_cast {
   }
 };
 
+struct size_kernel {
+  MSHADOW_XINLINE static void Map(int i, int64_t *out, unsigned int in) {
+out[0] = int64_t(in);
 
 Review comment:
   nit: `static_cast`


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: setup.sh and fix visualization in dqn_run_test.py (#11051)

2018-05-30 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 005f677  setup.sh and fix visualization in dqn_run_test.py (#11051)
005f677 is described below

commit 005f67759fac7bcf451e31b42c30b6c6ca24586a
Author: Pedro Larroy <928489+lar...@users.noreply.github.com>
AuthorDate: Thu May 31 03:36:29 2018 +0900

setup.sh and fix visualization in dqn_run_test.py (#11051)

fix type error: type of action needs to be int
---
 example/reinforcement-learning/dqn/README.md   | Bin 2146 -> 2230 bytes
 example/reinforcement-learning/dqn/dqn_run_test.py |   8 +---
 example/reinforcement-learning/dqn/setup.sh|   7 ++-
 3 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/example/reinforcement-learning/dqn/README.md 
b/example/reinforcement-learning/dqn/README.md
index fd32667..4547904 100644
Binary files a/example/reinforcement-learning/dqn/README.md and 
b/example/reinforcement-learning/dqn/README.md differ
diff --git a/example/reinforcement-learning/dqn/dqn_run_test.py 
b/example/reinforcement-learning/dqn/dqn_run_test.py
old mode 100644
new mode 100755
index 2abf273..e8f36b9
--- a/example/reinforcement-learning/dqn/dqn_run_test.py
+++ b/example/reinforcement-learning/dqn/dqn_run_test.py
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -89,8 +91,8 @@ def calculate_avg_reward(game, qnet, test_steps=125000, 
exploartion=0.05):
 current_state = game.current_state()
 state = nd.array(current_state.reshape((1,) + 
current_state.shape),
  ctx=qnet.ctx) / float(255.0)
-action = nd.argmax_channel(
-qnet.forward(is_train=False, data=state)[0]).asscalar()
+action = int(nd.argmax_channel(
+qnet.forward(is_train=False, 
data=state)[0]).asscalar())
 else:
 action = npy_rng.randint(action_num)
 
@@ -120,7 +122,7 @@ def main():
 help='Running Context. E.g `-c gpu` or `-c gpu1` or 
`-c cpu`')
 parser.add_argument('-e', '--epoch-range', required=False, type=str, 
default='22',
 help='Epochs to run testing. E.g `-e 0,80`, `-e 
0,80,2`')
-parser.add_argument('-v', '--visualization', required=False, type=int, 
default=0,
+parser.add_argument('-v', '--visualization', action='store_true',
 help='Visualize the runs.')
 parser.add_argument('--symbol', required=False, type=str, default="nature",
 help='type of network, nature or nips')
diff --git a/example/reinforcement-learning/dqn/setup.sh 
b/example/reinforcement-learning/dqn/setup.sh
index 3fcfacb..3069fef 100755
--- a/example/reinforcement-learning/dqn/setup.sh
+++ b/example/reinforcement-learning/dqn/setup.sh
@@ -22,9 +22,14 @@ set -x
 
 pip install opencv-python
 pip install scipy
+pip install pygame
 
 # Install arcade learning environment
-sudo apt-get install libsdl1.2-dev libsdl-gfx1.2-dev libsdl-image1.2-dev cmake
+if [[ "$OSTYPE" == "linux-gnu" ]]; then
+sudo apt-get install libsdl1.2-dev libsdl-gfx1.2-dev libsdl-image1.2-dev 
cmake
+elif [[ "$OSTYPE" == "darwin"* ]]; then
+brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
+fi
 git clone g...@github.com:mgbellemare/Arcade-Learning-Environment.git || true
 pushd .
 cd Arcade-Learning-Environment

-- 
To stop receiving notification emails like this one, please contact
j...@apache.org.
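
A small sketch of why the int(...) cast in the commit above is needed (the q-values here are made up; per the commit message, the action index must be a plain Python int):

```python
import mxnet as mx

q_values = mx.nd.array([[0.1, 0.9, 0.3]])
# argmax_channel returns a float NDArray, so asscalar() yields a numpy float;
# casting to int gives the plain integer action index the game loop expects.
action = int(mx.nd.argmax_channel(q_values).asscalar())
print(action)  # 1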


[GitHub] piiswrong closed pull request #11051: Fix DQN example

2018-05-30 Thread GitBox
piiswrong closed pull request #11051: Fix DQN example
URL: https://github.com/apache/incubator-mxnet/pull/11051
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/example/reinforcement-learning/dqn/README.md 
b/example/reinforcement-learning/dqn/README.md
index fd32667a1f8..4547904b595 100644
Binary files a/example/reinforcement-learning/dqn/README.md and 
b/example/reinforcement-learning/dqn/README.md differ
diff --git a/example/reinforcement-learning/dqn/dqn_run_test.py 
b/example/reinforcement-learning/dqn/dqn_run_test.py
old mode 100644
new mode 100755
index 2abf273978f..e8f36b97976
--- a/example/reinforcement-learning/dqn/dqn_run_test.py
+++ b/example/reinforcement-learning/dqn/dqn_run_test.py
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -89,8 +91,8 @@ def calculate_avg_reward(game, qnet, test_steps=125000, 
exploartion=0.05):
 current_state = game.current_state()
 state = nd.array(current_state.reshape((1,) + 
current_state.shape),
  ctx=qnet.ctx) / float(255.0)
-action = nd.argmax_channel(
-qnet.forward(is_train=False, data=state)[0]).asscalar()
+action = int(nd.argmax_channel(
+qnet.forward(is_train=False, 
data=state)[0]).asscalar())
 else:
 action = npy_rng.randint(action_num)
 
@@ -120,7 +122,7 @@ def main():
 help='Running Context. E.g `-c gpu` or `-c gpu1` or 
`-c cpu`')
 parser.add_argument('-e', '--epoch-range', required=False, type=str, 
default='22',
 help='Epochs to run testing. E.g `-e 0,80`, `-e 
0,80,2`')
-parser.add_argument('-v', '--visualization', required=False, type=int, 
default=0,
+parser.add_argument('-v', '--visualization', action='store_true',
 help='Visualize the runs.')
 parser.add_argument('--symbol', required=False, type=str, default="nature",
 help='type of network, nature or nips')
diff --git a/example/reinforcement-learning/dqn/setup.sh 
b/example/reinforcement-learning/dqn/setup.sh
index 3fcfacbe0a7..3069fef62ec 100755
--- a/example/reinforcement-learning/dqn/setup.sh
+++ b/example/reinforcement-learning/dqn/setup.sh
@@ -22,9 +22,14 @@ set -x
 
 pip install opencv-python
 pip install scipy
+pip install pygame
 
 # Install arcade learning environment
-sudo apt-get install libsdl1.2-dev libsdl-gfx1.2-dev libsdl-image1.2-dev cmake
+if [[ "$OSTYPE" == "linux-gnu" ]]; then
+sudo apt-get install libsdl1.2-dev libsdl-gfx1.2-dev libsdl-image1.2-dev 
cmake
+elif [[ "$OSTYPE" == "darwin"* ]]; then
+brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
+fi
 git clone g...@github.com:mgbellemare/Arcade-Learning-Environment.git || true
 pushd .
 cd Arcade-Learning-Environment


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #10889: [MXNET-382] Shape and Size Operator

2018-05-30 Thread GitBox
piiswrong commented on issue #10889: [MXNET-382] Shape and Size Operator
URL: https://github.com/apache/incubator-mxnet/pull/10889#issuecomment-393273816
 
 
   shape_nd still sounds weird as it's also available in symbol.
   
   BTW I think these operators can be useful but they won't solve the issue 
#10789
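
   For illustration, a sketch of the intended behaviour of the two operators under discussion; the names shape_array/size_array are an assumption here, since the final naming was still being debated in this thread:

```python
import mxnet as mx

x = mx.nd.zeros((3, 4, 5))
# The shape operator returns the shape as a 1-D int64 NDArray, and the size
# operator returns the total element count as a 1-element int64 NDArray.
print(mx.nd.shape_array(x).asnumpy())  # [3 4 5]
print(mx.nd.size_array(x).asnumpy())   # [60]
```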


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on issue #10889: [MXNET-382] Shape and Size Operator

2018-05-30 Thread GitBox
anirudhacharya commented on issue #10889: [MXNET-382] Shape and Size Operator
URL: https://github.com/apache/incubator-mxnet/pull/10889#issuecomment-393278649
 
 
   @piiswrong Yes, it will not solve the issue. We will also need changes to 
define the gradient of a shape operator. Currently shape and size have no 
gradient defined. Also we will need changes in nnvm to propagate this 
information during back-propagation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong closed pull request #11025: added ravel/unravel operators

2018-05-30 Thread GitBox
piiswrong closed pull request #11025: added ravel/unravel operators
URL: https://github.com/apache/incubator-mxnet/pull/11025
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/api/python/ndarray/ndarray.md 
b/docs/api/python/ndarray/ndarray.md
index 5bc3c52f2a7..323344d69c0 100644
--- a/docs/api/python/ndarray/ndarray.md
+++ b/docs/api/python/ndarray/ndarray.md
@@ -430,6 +430,8 @@ The `ndarray` package provides several classes:
 one_hot
 pick
 where
+ravel_multi_index
+unravel_index
 ```
 
 ## Mathematical functions
diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md
index f1e90a0c4d3..cc63e13e6ec 100644
--- a/docs/api/python/symbol/symbol.md
+++ b/docs/api/python/symbol/symbol.md
@@ -291,6 +291,8 @@ Composite multiple symbols into a new one by an operator.
 Symbol.take
 Symbol.one_hot
 Symbol.pick
+Symbol.ravel_multi_index
+Symbol.unravel_index
 ```
 
 ### Get internal and output symbol
diff --git a/src/operator/tensor/ravel.cc b/src/operator/tensor/ravel.cc
new file mode 100644
index 000..94e38948434
--- /dev/null
+++ b/src/operator/tensor/ravel.cc
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file ravel.cc
+ * \brief CPU-operators for ravel/unravel.
+ */
+#include "./ravel.h"
+
+namespace mxnet {
+namespace op {
+
+DMLC_REGISTER_PARAMETER(RavelParam);
+
+NNVM_REGISTER_OP(_ravel_multi_index)
+.add_alias("ravel_multi_index")
+.describe(R"code(Converts a batch of index arrays into an array of flat 
indices. The operator follows numpy conventions so a single multi index is 
given by a column of the input matrix. 
+
+Examples::
+   
+   A = [[3,6,6],[4,5,1]]
+   ravel(A, shape=(7,6)) = [22,41,37]
+
+)code" ADD_FILELINE)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser)
+.set_attr("FResourceRequest", [](const NodeAttrs& attrs)
+  { return std::vector{ResourceRequest::kTempSpace}; })
+.set_attr("FListInputNames", [](const NodeAttrs& attrs)
+  { return std::vector{"data"}; } )
+.set_attr("FInferShape", RavelOpShape)
+.set_attr("FInferType", ElemwiseType<1, 1>)
+.set_attr("FCompute", RavelForward)
+.set_attr("FGradient", MakeZeroGradNodes)
+.add_argument("data", "NDArray-or-Symbol", "Batch of multi-indices")
+.add_arguments(RavelParam::__FIELDS__());
+
+NNVM_REGISTER_OP(_unravel_index)
+.add_alias("unravel_index")
+.describe(R"code(Converts an array of flat indices into a batch of index 
arrays. The operator follows numpy conventions so a single multi index is given 
by a column of the output matrix.
+
+Examples::
+
+   A = [22,41,37]
+   unravel(A, shape=(7,6)) = [[3,6,6],[4,5,1]]
+
+)code" ADD_FILELINE)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser)
+.set_attr("FResourceRequest", [](const NodeAttrs& attrs)
+  { return std::vector{ResourceRequest::kTempSpace}; })
+.set_attr("FListInputNames", [](const NodeAttrs& attrs)
+  { return std::vector{"data"}; } )
+.set_attr("FInferShape", UnravelOpShape)
+.set_attr("FInferType", ElemwiseType<1, 1>)
+.set_attr("FCompute", UnravelForward)
+.set_attr("FGradient", MakeZeroGradNodes)
+.add_argument("data", "NDArray-or-Symbol", "Array of flat indices")
+.add_arguments(RavelParam::__FIELDS__());
+
+}  // namespace op
+}  // namespace mxnet
diff --git a/src/operator/tensor/ravel.cu b/src/operator/tensor/ravel.cu
new file mode 100644
index 000..cae50482013
--- /dev/null
+++ b/src/operator/tensor/ravel.cu
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org

[GitHub] piiswrong closed issue #10203: [Operator] unravel_index and ravel_multi_index

2018-05-30 Thread GitBox
piiswrong closed issue #10203: [Operator] unravel_index and ravel_multi_index
URL: https://github.com/apache/incubator-mxnet/issues/10203
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: added ravel/unravel operators (#11025)

2018-05-30 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 5109b00  added ravel/unravel operators (#11025)
5109b00 is described below

commit 5109b00b2473fa26de036ed21775b214e09d1bbc
Author: moin 
AuthorDate: Wed May 30 20:59:16 2018 +0200

added ravel/unravel operators (#11025)
---
 docs/api/python/ndarray/ndarray.md |   2 +
 docs/api/python/symbol/symbol.md   |   2 +
 src/operator/tensor/ravel.cc   |  81 
 src/operator/tensor/ravel.cu   |  36 +++
 src/operator/tensor/ravel.h| 166 +
 tests/python/unittest/test_operator.py |  15 +++
 6 files changed, 302 insertions(+)

diff --git a/docs/api/python/ndarray/ndarray.md 
b/docs/api/python/ndarray/ndarray.md
index 5bc3c52..323344d 100644
--- a/docs/api/python/ndarray/ndarray.md
+++ b/docs/api/python/ndarray/ndarray.md
@@ -430,6 +430,8 @@ The `ndarray` package provides several classes:
 one_hot
 pick
 where
+ravel_multi_index
+unravel_index
 ```
 
 ## Mathematical functions
diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md
index f1e90a0..cc63e13 100644
--- a/docs/api/python/symbol/symbol.md
+++ b/docs/api/python/symbol/symbol.md
@@ -291,6 +291,8 @@ Composite multiple symbols into a new one by an operator.
 Symbol.take
 Symbol.one_hot
 Symbol.pick
+Symbol.ravel_multi_index
+Symbol.unravel_index
 ```
 
 ### Get internal and output symbol
diff --git a/src/operator/tensor/ravel.cc b/src/operator/tensor/ravel.cc
new file mode 100644
index 000..94e3894
--- /dev/null
+++ b/src/operator/tensor/ravel.cc
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file ravel.cc
+ * \brief CPU-operators for ravel/unravel.
+ */
+#include "./ravel.h"
+
+namespace mxnet {
+namespace op {
+
+DMLC_REGISTER_PARAMETER(RavelParam);
+
+NNVM_REGISTER_OP(_ravel_multi_index)
+.add_alias("ravel_multi_index")
+.describe(R"code(Converts a batch of index arrays into an array of flat 
indices. The operator follows numpy conventions so a single multi index is 
given by a column of the input matrix. 
+
+Examples::
+   
+   A = [[3,6,6],[4,5,1]]
+   ravel(A, shape=(7,6)) = [22,41,37]
+
+)code" ADD_FILELINE)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser)
+.set_attr("FResourceRequest", [](const NodeAttrs& attrs)
+  { return std::vector{ResourceRequest::kTempSpace}; })
+.set_attr("FListInputNames", [](const NodeAttrs& attrs)
+  { return std::vector{"data"}; } )
+.set_attr("FInferShape", RavelOpShape)
+.set_attr("FInferType", ElemwiseType<1, 1>)
+.set_attr("FCompute", RavelForward)
+.set_attr("FGradient", MakeZeroGradNodes)
+.add_argument("data", "NDArray-or-Symbol", "Batch of multi-indices")
+.add_arguments(RavelParam::__FIELDS__());
+
+NNVM_REGISTER_OP(_unravel_index)
+.add_alias("unravel_index")
+.describe(R"code(Converts an array of flat indices into a batch of index 
arrays. The operator follows numpy conventions so a single multi index is given 
by a column of the output matrix.
+
+Examples::
+
+   A = [22,41,37]
+   unravel(A, shape=(7,6)) = [[3,6,6],[4,5,1]]
+
+)code" ADD_FILELINE)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser)
+.set_attr("FResourceRequest", [](const NodeAttrs& attrs)
+  { return std::vector{ResourceRequest::kTempSpace}; })
+.set_attr("FListInputNames", [](const NodeAttrs& attrs)
+  { return std::vector{"data"}; } )
+.set_attr("FInferShape", UnravelOpShape)
+.set_attr("FInferType", ElemwiseType<1, 1>)
+.set_attr("FCompute", UnravelForward)
+.set_attr("FGradient", MakeZeroGradNodes)
+.add_argument("data", "NDArray-or-Symbol", "Array of flat indices")
+.add_arguments(RavelParam::__FIELDS__());
+
+}  // namespace op
+}  // namespace mxnet
diff --git a/src/operator/tensor/ravel.cu b/src/operator/tensor/ravel.cu
new file mode 100644
index 000..cae5048
--- /dev/null
+++ b/src/operator/tensor/ravel.cu
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache S
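
The commit diff above is truncated by the archive. For illustration, a usage sketch of the two new operators, mirroring the column-wise index convention and values from the documented examples:

```python
import mxnet as mx

a = mx.nd.array([[3, 6, 6], [4, 5, 1]])          # each column is one (row, col) index
flat = mx.nd.ravel_multi_index(a, shape=(7, 6))  # -> [22. 41. 37.]
back = mx.nd.unravel_index(flat, shape=(7, 6))   # -> [[3. 6. 6.], [4. 5. 1.]]
print(flat.asnumpy())
print(back.asnumpy())
```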

[GitHub] anirudhacharya commented on issue #10889: [MXNET-382] Shape and Size Operator

2018-05-30 Thread GitBox
anirudhacharya commented on issue #10889: [MXNET-382] Shape and Size Operator
URL: https://github.com/apache/incubator-mxnet/pull/10889#issuecomment-393278649
 
 
   @piiswrong Yes, it will not solve the issue. We will also need changes to 
define the gradient of a shape operator. Currently shape and size have no 
gradient defined. Also we will need changes in nnvm to propagate this 
information during back-propagation.
   
   gather_nd is also available in symbol - 
http://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.gather_nd
   
   but I am open other name suggestions.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on a change in pull request #10970: [MXNET-424] dtype option for multinomial

2018-05-30 Thread GitBox
piiswrong commented on a change in pull request #10970: [MXNET-424] dtype 
option for multinomial
URL: https://github.com/apache/incubator-mxnet/pull/10970#discussion_r191885873
 
 

 ##
 File path: src/operator/random/sample_multinomial_op.h
 ##
 @@ -67,6 +70,10 @@ inline bool SampleMultinomialOpShape(const nnvm::NodeAttrs& 
attrs,
   const TShape& ishape = (*in_attrs)[0];
   if (!ishape.ndim()) return false;
 
+  MSHADOW_TYPE_SWITCH(param.dtype, DType, {
+CHECK_LE(ishape[ishape.ndim() - 1], 
mxnet::common::MaxIntegerValue());
 
 Review comment:
   Need to output a message saying why it failed.
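
   For context, a usage sketch of the dtype option this PR adds (the probability values are made up; the accepted dtype strings depend on the final parameter definition):

```python
import mxnet as mx

probs = mx.nd.array([0.1, 0.2, 0.3, 0.4])
# With the new option, sampled indices can be returned in a chosen dtype
# rather than the previously fixed output type.
samples = mx.nd.random.multinomial(probs, shape=5, dtype='int32')
print(samples)
```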


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #10970: [MXNET-424] dtype option for multinomial

2018-05-30 Thread GitBox
piiswrong commented on issue #10970: [MXNET-424] dtype option for multinomial
URL: https://github.com/apache/incubator-mxnet/pull/10970#issuecomment-393279769
 
 
   otherwise LGTM


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Roshrini commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
Roshrini commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] 
API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191898713
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/import_model.py
 ##
 @@ -53,6 +53,31 @@ def import_model(model_file):
 sym, arg_params, aux_params = graph.from_onnx(model_proto.graph)
 return sym, arg_params, aux_params
 
+def get_model_metadata(model_file):
+"""
+Returns the name and shape information of input and output tensors of the 
given ONNX model file.
+
+Parameters
+--
+model_file : str
+ONNX model file name
+
+Returns
+---
+model_metadata : dict
+A dictionary object mapping various metadata to its corresponding 
value.
+"""
+graph = GraphProto()
+try:
+import onnx
+except ImportError:
+raise ImportError("Onnx and protobuf need to be installed. "
+  + "Instructions to install - 
https://github.com/onnx/onnx";)
+model_proto = onnx.load(model_file)
+
+metadata = graph.get_graph_metadata(model_proto.graph)
+return metadata
+
 def get_model_metadata(model_file):
 
 Review comment:
   this method is repeated?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Roshrini commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
Roshrini commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] 
API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191900033
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/import_onnx.py
 ##
 @@ -155,6 +157,40 @@ def get_graph_metadata(self, graph):
}
 return metadata
 
+def graph_to_gluon(self, graph, context):
+"""Construct SymbolBlock from onnx graph.
+
+Parameters
+--
+graph : onnx protobuf object
+The loaded onnx graph
+context : str
+context for mxnet module object. Should be 'CPU' or 'GPU'
+
+Returns
+---
+sym_block :gluon.nn.SymbolBlock
+The returned gluon SymbolBlock
+"""
+sym, arg_params, aux_params = self.from_onnx(graph)
+metadata = self.get_graph_metadata(graph)
+data_names = [input_tensor[0] for input_tensor in 
metadata['input_tensor_data']]
+data_inputs = [symbol.var(data_name) for data_name in data_names]
+
+ctx = gpu() if context == 'GPU' else cpu()
 
 Review comment:
   from  import cpu, gpu


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11066: migrating docs build and publish job to secure nodes

2018-05-30 Thread GitBox
marcoabreu commented on issue #11066: migrating docs build and publish job to 
secure nodes
URL: https://github.com/apache/incubator-mxnet/pull/11066#issuecomment-393307188
 
 
   Could you please also put the publish job into the Jenkinsfile structure?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11066: migrating docs build and publish job to secure nodes

2018-05-30 Thread GitBox
marcoabreu commented on issue #11066: migrating docs build and publish job to 
secure nodes
URL: https://github.com/apache/incubator-mxnet/pull/11066#issuecomment-393307188
 
 
   Could you please also put the publish job into the Jenkinsfile structure - 
or maybe just combine both parts into one job?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on issue #11079: Is it possible to apply intersection/union on mx.nd.array

2018-05-30 Thread GitBox
reminisce commented on issue #11079: Is it possible to apply intersection/union 
on mx.nd.array
URL: 
https://github.com/apache/incubator-mxnet/issues/11079#issuecomment-393311570
 
 
   You can call the operators explicitly: `mx.nd.logical_and(a, b)` and 
`mx.nd.logical_or(a, b)`. GPU is supported.
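
   A small sketch of the suggested calls, using the operator names given in the reply above (0/1 masks; element-wise semantics assumed):

```python
import mxnet as mx

a = mx.nd.array([0, 1, 1, 0])
b = mx.nd.array([1, 1, 0, 0])
print(mx.nd.logical_and(a, b).asnumpy())  # intersection -> [0. 1. 0. 0.]
print(mx.nd.logical_or(a, b).asnumpy())   # union        -> [1. 1. 1. 0.]
```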


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on issue #11032: Problem with load lstm model with mx.Module's bind

2018-05-30 Thread GitBox
reminisce commented on issue #11032: Problem with load lstm model with 
mx.Module's bind
URL: 
https://github.com/apache/incubator-mxnet/issues/11032#issuecomment-393318792
 
 
   My guess is that some operator's shape inference logic is not complete. You 
mentioned it can be loaded by another version of MXNet; which version is that? If 
possible, can you provide the sym file?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] spidyDev commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
spidyDev commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] 
API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191930582
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -514,8 +514,9 @@ integrationtest_ubuntu_cpu_onnx() {
set -ex
export PYTHONPATH=./python/
python example/onnx/super_resolution.py
-   pytest tests/python-pytest/onnx/onnx_backend_test.py
+   pytest tests/python-pytest/onnx/mxnet_backend_test.py
pytest tests/python-pytest/onnx/onnx_test.py
+   pytest tests/python-pytest/onnx/gluon_backend_test.py
 
 Review comment:
   same here


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] spidyDev commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
spidyDev commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] 
API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191930536
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -514,8 +514,9 @@ integrationtest_ubuntu_cpu_onnx() {
set -ex
export PYTHONPATH=./python/
python example/onnx/super_resolution.py
-   pytest tests/python-pytest/onnx/onnx_backend_test.py
+   pytest tests/python-pytest/onnx/mxnet_backend_test.py
 
 Review comment:
   Shouldn't this be onnx/import/mxnet_backend_test.py?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] KaiserSozo opened a new issue #11101: Gluon Performance and memory conumption

2018-05-30 Thread GitBox
KaiserSozo opened a new issue #11101: Gluon Performance and memory conumption
URL: https://github.com/apache/incubator-mxnet/issues/11101
 
 
   Working under gpu, I have next code:
   
   for i, (data) in enumerate(trainingInputs):
   calcT = time.time()
   data = data.as_in_context(ctx)
   output, win_index, delta, mask = netSom(data)
   calc += time.time() - calcT
   copyT = time.time()
   weightsData = weights.data()
   ratesData = rates.data()
   ratesData[win_index] += 1
   weightsData[win_index] += delta
   ratesData.wait_to_read()
   weightsData.wait_to_read()
   train_accuracy += output.asscalar()
   copy += time.time() - copyT
   
   The calculation time (accumulated in the calc variable) is 5 times less than 
the copy time (in the copy variable). Why, and how can it be reduced?
   I also noted that if I remove the wait_to_read() calls, the copy time is 0, 
but memory consumption keeps increasing and leads to a memory allocation failure. 
I see almost the same behaviour in the following code using Gluon:
   
   for data, label in itertools.izip(trainingInputs, trainingOutputs):
   calcT = time.time()
   data = data.as_in_context(ctx)
   label = label.as_in_context(ctx)
   output, win_index, delta, mask = netSom(data)
   data = data.reshape((-1,inputsCount))
   with autograd.record():
   args = (data, mask)
   output = net(*args)
   l2loss = loss(output, label)
   l2loss.backward()
   calc += time.time() - calcT
   copyT = time.time()
   trainer.step(data.shape[0])
   copy += time.time() - copyT
   i+=1
   
   testT = time.time()
   test_accuracy = evaluate_accuracyMLP(testInputs, testOutputs, net, 
netSom, inputsCount, activeNeuronsCount)
   test += time.time() - testT
   
   Here the calculation and copying (gradient adjusting) times are almost equal. 
I am also looking for a way to decrease the copy time; it should be considerably 
less than the calculation time. The same strange behaviour persists here: if I 
remove the 'evaluate_accuracyMLP' call, memory consumption keeps increasing until 
an allocation error.
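
   One note that may explain the numbers above: MXNet operations are queued asynchronously, so a wall-clock timer only measures real work where a synchronization point is placed. A minimal sketch (the array size is arbitrary):

```python
import time
import mxnet as mx

x = mx.nd.random.uniform(shape=(2048, 2048))
start = time.time()
y = mx.nd.dot(x, x)          # returns immediately; the work is only queued
queued = time.time() - start
y.wait_to_read()             # blocks until the result is actually computed
finished = time.time() - start
print(queued, finished)      # queued is near zero, finished includes the compute
```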


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil opened a new pull request #11102: Updating readme to fix broken docs status

2018-05-30 Thread GitBox
ThomasDelteil opened a new pull request #11102: Updating readme to fix broken 
docs status
URL: https://github.com/apache/incubator-mxnet/pull/11102
 
 
   The current readme shows a broken docs build badge because the job has been 
removed; this fixes it and adds a table clarifying which build is which.
   
   **previous**
   ![screen shot 2018-05-30 at 2 57 07 
pm](https://user-images.githubusercontent.com/3716307/40750041-cedc7b0c-641a-11e8-851a-b74950ac22b7.png)
   
   **new**
   
   ![screen shot 2018-05-30 at 2 57 16 
pm](https://user-images.githubusercontent.com/3716307/40750058-da68d1aa-641a-11e8-9097-32797b79372c.png)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce closed issue #10277: Flaky test_random.test_shuffle

2018-05-30 Thread GitBox
reminisce closed issue #10277: Flaky test_random.test_shuffle
URL: https://github.com/apache/incubator-mxnet/issues/10277
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #9686: APIs that might be a good idea to break in 2.0

2018-05-30 Thread GitBox
szha commented on issue #9686: APIs that might be a good idea to break in 2.0
URL: 
https://github.com/apache/incubator-mxnet/issues/9686#issuecomment-393338016
 
 
   #10807 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rahul003 commented on issue #10455: Bug of group2ctxs for model parallelism

2018-05-30 Thread GitBox
rahul003 commented on issue #10455: Bug of group2ctxs for model parallelism
URL: 
https://github.com/apache/incubator-mxnet/issues/10455#issuecomment-392881423
 
 
   Could you provide the script magic.py and the softmax implementation as an 
example to investigate?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on a change in pull request #11053: [MXNET-244] Fixed armv7 wheel

2018-05-30 Thread GitBox
lebeg commented on a change in pull request #11053: [MXNET-244] Fixed armv7 
wheel
URL: https://github.com/apache/incubator-mxnet/pull/11053#discussion_r191941010
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -83,31 +102,40 @@ build_armv6() {
 -DBUILD_CPP_EXAMPLES=OFF \
 -Dmxnet_LINKER_LIBS=-lgfortran \
 -G Ninja /work/mxnet
+
 ninja
-export MXNET_LIBRARY_PATH=`pwd`/libmxnet.so
-cd /work/mxnet/python
-python setup.py bdist_wheel --universal
-cp dist/*.whl /work/build
+build_wheel
+
 popd
 }
 
 build_armv7() {
 set -ex
 pushd .
 cd /work/build
-cmake\
--DUSE_CUDA=OFF\
--DUSE_OPENCV=OFF\
--DUSE_OPENMP=OFF\
--DUSE_SIGNAL_HANDLER=ON\
--DCMAKE_BUILD_TYPE=RelWithDebInfo\
--DUSE_MKL_IF_AVAILABLE=OFF\
+
+# Lapack functionality will be included and statically linked to openblas.
 
 Review comment:
   Anyway, we will certainly look into the issue mentioned above. Do you see it 
as feasible to introduce the improvement in a follow-up PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on issue #10889: [MXNET-382] Shape and Size Operator

2018-05-30 Thread GitBox
anirudhacharya commented on issue #10889: [MXNET-382] Shape and Size Operator
URL: https://github.com/apache/incubator-mxnet/pull/10889#issuecomment-393278649
 
 
   @piiswrong Yes, it will not solve the issue. We will also need changes to 
define the gradient of a shape operator. Currently shape and size have no 
gradient defined. Also we will need changes in nnvm to propagate this 
information during back-propagation.
   
   gather_nd is also available in symbol - 
http://mxnet.incubator.apache.org/api/python/symbol/symbol.html#mxnet.symbol.gather_nd
   
   but I am open to other name suggestions.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on issue #11098: Installation instructions MacOS/R/CPU don't work.

2018-05-30 Thread GitBox
anirudhacharya commented on issue #11098: Installation instructions MacOS/R/CPU 
don't work.
URL: 
https://github.com/apache/incubator-mxnet/issues/11098#issuecomment-393342377
 
 
   can you try the following link - 
``install.packages("https://s3.ca-central-1.amazonaws.com/jeremiedb/share/mxnet/CPU/mxnet.zip";,
 repos = NULL)``
   
   @eric-haibin-lin please label this - "R", "Installation"


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on issue #11093: Some wrong with mxnet on spark: params.jars = jars.split(", |:")

2018-05-30 Thread GitBox
anirudhacharya commented on issue #11093: Some wrong with mxnet on spark: 
params.jars = jars.split(",|:")
URL: 
https://github.com/apache/incubator-mxnet/issues/11093#issuecomment-393343662
 
 
   @nswamy 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 opened a new issue #11103: Problem with test_contrib_io.test_contrib_DataLoaderIter

2018-05-30 Thread GitBox
haojin2 opened a new issue #11103: Problem with 
test_contrib_io.test_contrib_DataLoaderIter
URL: https://github.com/apache/incubator-mxnet/issues/11103
 
 
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-10931/11/pipeline/719
   
   ==
   
   ERROR: test_contrib_io.test_contrib_DataLoaderIter
   
   --
   
   Traceback (most recent call last):
   
 File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in 
runTest
   
   self.test(*self.arg)
   
 File "/work/mxnet/tests/python/unittest/test_contrib_io.py", line 39, in 
test_contrib_DataLoaderIter
   
   test_mnist_batches(50, num_examples // 50, 'discard')
   
 File "/work/mxnet/tests/python/unittest/test_contrib_io.py", line 26, in 
test_mnist_batches
   
   dataset = MNIST(train=False)
   
 File "/work/mxnet/python/mxnet/gluon/data/vision/datasets.py", line 66, in 
__init__
   
   super(MNIST, self).__init__(root, transform)
   
 File "/work/mxnet/python/mxnet/gluon/data/dataset.py", line 197, in 
__init__
   
   self._get_data()
   
 File "/work/mxnet/python/mxnet/gluon/data/vision/datasets.py", line 77, in 
_get_data
   
   sha1_hash=data[1])
   
 File "/work/mxnet/python/mxnet/gluon/utils.py", line 212, in download
   
   raise RuntimeError("Failed downloading url %s"%url)
   
   RuntimeError: Failed downloading url 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/mnist/t10k-images-idx3-ubyte.gz
   
    >> begin captured stdout << -
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #11066: migrating docs build and publish job to secure nodes

2018-05-30 Thread GitBox
aaronmarkham commented on issue #11066: migrating docs build and publish job to 
secure nodes
URL: https://github.com/apache/incubator-mxnet/pull/11066#issuecomment-393346372
 
 
   I wanted to get this first part working on the secure slaves. Then circle 
back with combining the job if I can manage to get the credentials part working.
   So can we push this through as-is, now that you've fixed the publish job?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Roshrini commented on issue #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
Roshrini commented on issue #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX 
models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#issuecomment-393347493
 
 
   Can you rename  'tests/python-pytest/onnx/import/onnx_test.py' to  
'tests/python-pytest/onnx/import/onnx_import_test.py'? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #11102: Updating readme to fix broken docs status

2018-05-30 Thread GitBox
aaronmarkham commented on issue #11102: Updating readme to fix broken docs 
status
URL: https://github.com/apache/incubator-mxnet/pull/11102#issuecomment-393350590
 
 
   Thanks Thomas!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha opened a new pull request #11104: allow int shape in parameter

2018-05-30 Thread GitBox
szha opened a new pull request #11104: allow int shape in parameter
URL: https://github.com/apache/incubator-mxnet/pull/11104
 
 
   ## Description ##
   allow int shape in Gluon parameter
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated.
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] allow int shape in parameter
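
   A quick sketch of the behaviour this enables (assuming the int is interpreted as a 1-D shape):

```python
import mxnet as mx

# shape=5 can now be passed instead of shape=(5,)
p = mx.gluon.Parameter('weight', shape=5, init=mx.init.Zero())
p.initialize(ctx=mx.cpu())
print(p.data().shape)  # (5,)
```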


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-05-30 Thread GitBox
aaronmarkham commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r191953224
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,229 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl 
USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have full MKL library installed, you can use OpenBLAS by setting 
`USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies, required for MXNet, with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
+
+```
+CC=gcc-4.9 CXX=g++-4.9 cmake $MKLDNN_ROOTDIR 
-DCMAKE_INSTALL_PREFIX=$MKLDNN_INSTALLDIR -B$MKLDNN_BUILDDIR 
-DARCH_OPT_FLAGS="-mtune=generic" -DWITH_TEST=OFF -DWITH_EXAMPLE=OFF >&2
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(sysctl -n hw.ncpu) USE_OPENCV=0 USE_OPENMP=1 USE_MKLDNN=1 
USE_BLAS=apple USE_PROFILER=1
+```
+
+*Note: Temporarily disable OPENCV.*
+
+Windows
+
+To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
 
 Review comment:
   Is there an issue with using the latest Visual Studio 2017 Community Edition?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
anirudhacharya commented on a change in pull request #10605: [MXNET-310] 
[ONNX-MXNet] API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191953871
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/import_to_gluon.py
 ##
 @@ -0,0 +1,45 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# coding: utf-8
+"""Import ONNX model to gluon interface"""
+from .import_onnx import GraphProto
+
+def import_to_gluon(model_file):
+"""
+Imports the ONNX model files, passed as a parameter, into Gluon 
SymbolBlock object.
+
+Parameters
+--
+model_file : str
+ONNX model file name
+
+Returns
+---
+sym_block : :class:`~mxnet.gluon.SymbolBlock`
+A SymbolBlock object representing the given model file.
+"""
+graph = GraphProto()
+try:
+import onnx
+except ImportError:
+raise ImportError("Onnx and protobuf need to be installed. 
Instructions to"
+  + " install - 
https://github.com/onnx/onnx#installation";)
+model_proto = onnx.load(model_file)
 
 Review comment:
   not needed.
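   For context, a minimal usage sketch of the `import_to_gluon` entry point 
under review (the module path is copied from the file header above, the model 
file name is hypothetical, and a public re-export may differ):

```python
# Module path as shown in the file under review; a public re-export may differ.
from mxnet.contrib.onnx._import.import_to_gluon import import_to_gluon

net = import_to_gluon('model.onnx')   # hypothetical ONNX model file
print(type(net))                      # a gluon SymbolBlock per the docstring
```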


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
anirudhacharya commented on a change in pull request #10605: [MXNET-310] 
[ONNX-MXNet] API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191953906
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/translation_utils.py
 ##
 @@ -148,21 +152,25 @@ def _fix_bias(op_name, attrs, num_inputs):
 raise ValueError("Unexpected number of inputs for: {}".format(op_name))
 return attrs
 
-def _fix_bias_shape(op_name, inputs, cls):
+def _fix_broadcast(op_name, inputs, broadcast_axis, cls):
 """A workaround to reshape bias term to (1, num_channel)."""
 if int(len(cls._params)) > 0:
 assert len(list(inputs)) == 2
-
-op_sym = symbol.reshape(inputs[1], shape=(1, -1, 1, 1))
+ 
+input0_shape = get_input_shape(inputs[0], cls)
+#creating reshape shape
+reshape_shape= list(len(input0_shape) * (1,))
+reshape_shape[broadcast_axis] = -1
+reshape_shape = tuple(reshape_shape)
+op_sym = symbol.reshape(inputs[1], shape=reshape_shape)
 if op_name == 'broadcast_add':
-op_sym = symbol.broadcast_add(op_sym, inputs[0])
+op_sym = symbol.broadcast_add(inputs[0], op_sym)
 elif op_name == 'broadcast_mul':
 
 Review comment:
   done
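   To make the reshape-shape construction above concrete, a minimal sketch 
(the example input shape and broadcast axis are assumptions for illustration):

```python
# Build the reshape target used to broadcast a bias term along one axis.
input0_shape = (2, 3, 224, 224)   # hypothetical shape of inputs[0]
broadcast_axis = 1

reshape_shape = list(len(input0_shape) * (1,))  # [1, 1, 1, 1]
reshape_shape[broadcast_axis] = -1              # [1, -1, 1, 1]
print(tuple(reshape_shape))                     # (1, -1, 1, 1)
```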


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on a change in pull request #10605: [MXNET-310] [ONNX-MXNet] API to import ONNX models into Gluon.

2018-05-30 Thread GitBox
anirudhacharya commented on a change in pull request #10605: [MXNET-310] 
[ONNX-MXNet] API to import ONNX models into Gluon.
URL: https://github.com/apache/incubator-mxnet/pull/10605#discussion_r191953923
 
 

 ##
 File path: python/mxnet/contrib/onnx/_import/import_onnx.py
 ##
 @@ -155,6 +157,40 @@ def get_graph_metadata(self, graph):
}
 return metadata
 
+def graph_to_gluon(self, graph, context):
+"""Construct SymbolBlock from onnx graph.
+
+Parameters
+--
+graph : onnx protobuf object
+The loaded onnx graph
+context : str
+context for mxnet module object. Should be 'CPU' or 'GPU'
+
+Returns
+---
+sym_block :gluon.nn.SymbolBlock
+The returned gluon SymbolBlock
+"""
+sym, arg_params, aux_params = self.from_onnx(graph)
+metadata = self.get_graph_metadata(graph)
+data_names = [input_tensor[0] for input_tensor in 
metadata['input_tensor_data']]
+data_inputs = [symbol.var(data_name) for data_name in data_names]
+
+ctx = gpu() if context == 'GPU' else cpu()
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] indhub commented on issue #11069: [MXNET-480] New version select for Install page

2018-05-30 Thread GitBox
indhub commented on issue #11069: [MXNET-480] New version select for Install 
page
URL: https://github.com/apache/incubator-mxnet/pull/11069#issuecomment-393354028
 
 
   **Problem:**
   
   If I have to give someone a link to install the latest stable Python version 
on GPU, I guess I will have to give this URL:
   
http://54.210.6.225/install/index.html?version=v1.2.0&device=Linux&language=Python&processor=GPU
   
   Note that this URL will no longer install the latest version after 1.3 is 
released.
   
   **Suggested solution:**
   
   The version selector can have one more option called 'Latest Stable (1.2)', 
which should be the default selection. When the user selects it, the URL should 
not contain the 'version=' argument and the install command must not mention 
any version.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kpmurali opened a new pull request #11105: Navbar community fix

2018-05-30 Thread GitBox
kpmurali opened a new pull request #11105: Navbar community fix
URL: https://github.com/apache/incubator-mxnet/pull/11105
 
 
   ## Description ##
   The community drop-down doesn't work properly when the screen width is 
drastically reduced, because its sub-menu was never added to the display logic 
and clicking the sub-menu closes the menu. This PR fixes these issues.
   
   ## Checklist ##
   ### Changes ###
   - [ x ] Community drop-down now works when the screen width is reduced and 
it is added to the Plus icon
   - [ x ] The sub-menu under the burger icon won't disappear on click when 
the screen width is reduced
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] spidyDev opened a new pull request #11106: [ONNX] Added Unsqueeze operator import support

2018-05-30 Thread GitBox
spidyDev opened a new pull request #11106: [ONNX] Added Unsqueeze operator 
import support
URL: https://github.com/apache/incubator-mxnet/pull/11106
 
 
   ## Description ##
   The ONNX Unsqueeze op maps to MXNet's expand_dims. Added the support.
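   A minimal sketch of the mapping (applying expand_dims once per axis is an 
illustration of the idea, not necessarily this PR's exact implementation):

```python
import mxnet as mx

def unsqueeze(data, axes):
    # Insert one new axis per entry in `axes`, smallest axis first,
    # mirroring ONNX Unsqueeze semantics with mxnet's expand_dims.
    sym = data
    for axis in sorted(axes):
        sym = mx.sym.expand_dims(sym, axis=axis)
    return sym

x = mx.sym.Variable('x')        # e.g. shape (3, 4)
y = unsqueeze(x, axes=[0, 3])   # resulting shape (1, 3, 4, 1)
```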
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Unsqueeze op support
   - [ ] Added operator test.
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #11097: fix nd.backward when out_grad is None

2018-05-30 Thread GitBox
szha commented on issue #11097: fix nd.backward when out_grad is None
URL: https://github.com/apache/incubator-mxnet/pull/11097#issuecomment-393362884
 
 
   What happens currently is 
https://github.com/apache/incubator-mxnet/blob/master/src/c_api/c_api_ndarray.cc#L352
 which matches https://en.cppreference.com/w/cpp/language/reinterpret_cast and 
then 
https://github.com/apache/incubator-mxnet/blob/master/src/imperative/imperative.cc#L389-L395


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #11097: fix nd.backward when out_grad is None

2018-05-30 Thread GitBox
szha commented on issue #11097: fix nd.backward when out_grad is None
URL: https://github.com/apache/incubator-mxnet/pull/11097#issuecomment-393362884
 
 
   What happens currently is 
https://github.com/apache/incubator-mxnet/blob/master/src/c_api/c_api_ndarray.cc#L352
 which matches https://en.cppreference.com/w/cpp/language/reinterpret_cast 
(explanation 4) and then 
https://github.com/apache/incubator-mxnet/blob/master/src/imperative/imperative.cc#L389-L395
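   For context, a minimal sketch of the call this fix targets: `backward()` 
invoked without an explicit head gradient, so out_grad arrives as None (shapes 
are arbitrary):

```python
import mxnet as mx

x = mx.nd.ones((2, 3))
x.attach_grad()
with mx.autograd.record():
    y = 2 * x
# No out_grad passed: an all-ones head gradient is implied.
y.backward()
print(x.grad)   # expect an array filled with 2s
```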


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rahul003 commented on a change in pull request #9994: [MXNET-59] Tensorboard: Add histogram callback

2018-05-30 Thread GitBox
rahul003 commented on a change in pull request #9994: [MXNET-59] Tensorboard: 
Add histogram callback
URL: https://github.com/apache/incubator-mxnet/pull/9994#discussion_r189720574
 
 

 ##
 File path: python/mxnet/contrib/tensorboard.py
 ##
 @@ -71,3 +72,43 @@ def __call__(self, param):
 if self.prefix is not None:
 name = '%s-%s' % (self.prefix, name)
 self.summary_writer.add_scalar(name, value)
+
+def node_histogram_visualization(self, prefix=None, node_names=None, 
bins="auto"):
+"""Node histogram visualization in TensorBoard.
+This callback works almost same as `callback.module_checkpoint`,
+but write TensorBoard event file for visualization.
+For more usage, please refer https://github.com/dmlc/tensorboard
+
+Parameters
+--
+prefix : str
+Prefix for a metric name of `histograms` and `distributions` value.
+node_names : list of str, optional
+Name of nodes list you want to visualize.
+If set 'None', this callback visualize all nodes histogram and 
distributions.
+Default node_names = None.
+bins : str
+one of {'tensorflow','auto', 'fd', ...}, this determines how the 
bins are made.
+You can find other options in:
+
https://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram.html
+Default bins = 'auto'
+"""
+self.histogram_prefix = prefix
+self.node_names = node_names
+self.bins = bins
+
+# pylint: disable=unused-argument
+def _callback(iter_no, sym=None, arg=None, aux=None):
+"""Callback to log node histogram visualization in TensorBoard."""
+for k, v in arg.items():
+if self.node_names is None or k in self.node_names:
+if self.histogram_prefix is not None:
+name = '%s-%s' % (self.histogram_prefix, k)
+self.summary_writer.add_histogram(name, v, 
global_step=iter_no, bins=self.bins)
+for k, v in aux.items():
+if self.node_names is None or k in self.node_names:
+if self.histogram_prefix is not None:
+name = '%s-%s' % (self.histogram_prefix, k)
+self.summary_writer.add_histogram(name, v, 
global_step=iter_no, bins=self.bins)
 
 Review comment:
   This PR is still helpful for using symbolic mode with mxboard. I'm trying 
to use it; however, `name` here is used without being initialized when no 
prefix is set. In case someone else is using it, you need to make that change.
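   A minimal sketch of the initialization this comment asks for, pulled out as 
a standalone helper (the helper name is hypothetical; in the callback it would 
replace the prefix-only assignment of `name` in both loops):

```python
def _histogram_name(key, prefix=None):
    # Default to the raw node key; only prepend when a prefix is given.
    return key if prefix is None else '%s-%s' % (prefix, key)

print(_histogram_name('fc1_weight'))            # fc1_weight
print(_histogram_name('fc1_weight', 'train'))   # train-fc1_weight
```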


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin closed pull request #10894: [MXNET-399] Elemwise_mul between dense and csr on CPU & GPU

2018-05-30 Thread GitBox
eric-haibin-lin closed pull request #10894: [MXNET-399] Elemwise_mul between 
dense and csr on CPU & GPU
URL: https://github.com/apache/incubator-mxnet/pull/10894
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/elemwise_binary_op-inl.h 
b/src/operator/tensor/elemwise_binary_op-inl.h
index c74f1f93603..911c369b3e6 100644
--- a/src/operator/tensor/elemwise_binary_op-inl.h
+++ b/src/operator/tensor/elemwise_binary_op-inl.h
@@ -495,6 +495,91 @@ void ElemwiseBinaryOp::DnsCsrDnsOp(mshadow::Stream *s,
   });
 }
 
+/*!
+ * \brief Kernel for performing elemwise op between dense and csr matrix
+ * \param iglobal thread id
+ * \param req  type of request
+ * \param out  output array
+ * \param dns_data data array of dense input
+ * \param csr_data data array of csr input
+ * \param csr_indices  indices array of csr input
+ * \param csr_indptr   indptr array of csr input
+ * \param num_rows number of rows of both inputs
+ * \param num_cols number of columns of both inputs
+ */
+template
+struct ElemwiseDnsCsrCsrKernel {
+  template
+  MSHADOW_XINLINE static void Map(int i, DType* out, DType* dns_data,
+  const DType* csr_data, const IType* 
csr_indices,
+  const CType* csr_indptr, const nnvm::dim_t 
num_rows,
+  const nnvm::dim_t num_cols) {
+if (i < num_rows) {
+  for (int j = csr_indptr[i]; j < csr_indptr[i+1]; ++j) {
+KERNEL_ASSIGN(out[j], req, reverse ?
+   OP::Map(dns_data[i * num_cols + 
csr_indices[j]], csr_data[j]) :
+   OP::Map(csr_data[j], dns_data[i * num_cols 
+ csr_indices[j]]));
+  }
+}
+  }
+};
+
+/*! \brief DNS -op- CSR binary operator for non-canonical NDArray */
+template
+void ElemwiseBinaryOp::DnsCsrCsrOp(const nnvm::NodeAttrs &attrs,
+   const OpContext &ctx,
+   const NDArray &dns,
+   const NDArray &csr,
+   const OpReqType req,
+   const NDArray &output,
+   const bool reverse) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+  using namespace csr;
+  CHECK_EQ(dns.storage_type(), kDefaultStorage);
+  CHECK_EQ(csr.storage_type(), kCSRStorage);
+  CHECK_EQ(req, kWriteTo) << "elemwise(dns, csr) = csr only supports kWriteTo";
+  if (req == kNullOp) return;
+  const bool supported_op = std::is_same::value;
+  CHECK(supported_op == true) << "elemwise(dns, csr) = csr only supports mul";
+  const nnvm::dim_t num_csr_rows = csr.shape()[0];
+  const nnvm::dim_t num_csr_cols = csr.shape()[1];
+  const nnvm::dim_t nnz = csr.storage_shape()[0];
+  Stream *s = ctx.get_stream();
+
+  output.CheckAndAlloc({Shape1(num_csr_rows + 1), Shape1(nnz)});
+  if (csr.storage_initialized()) {
+TBlob csr_data = csr.data();
+TBlob csr_indices = csr.aux_data(kIdx);
+TBlob csr_indptr = csr.aux_data(kIndPtr);
+MSHADOW_SGL_DBL_TYPE_SWITCH(csr_data.type_flag_, DType, {
+  MSHADOW_IDX_TYPE_SWITCH(csr_indices.type_flag_, IType, {
+MSHADOW_IDX_TYPE_SWITCH(csr_indptr.type_flag_, CType, {
+  MXNET_ASSIGN_REQ_SWITCH(req, Req, {
+if (reverse) {
+  Kernel, xpu>::Launch(
+s, num_csr_rows, output.data().dptr(), 
dns.data().dptr(),
+csr_data.dptr(), csr_indices.dptr(), 
csr_indptr.dptr(),
+num_csr_rows, num_csr_cols);
+} else {
+  Kernel, xpu>::Launch(
+s, num_csr_rows, output.data().dptr(), 
dns.data().dptr(),
+csr_data.dptr(), csr_indices.dptr(), 
csr_indptr.dptr(),
+num_csr_rows, num_csr_cols);
+}
+Copy(output.aux_data(kIdx).FlatTo1D(),
+ csr.aux_data(kIdx).FlatTo1D(), s);
+Copy(output.aux_data(kIndPtr).FlatTo1D(),
+ csr.aux_data(kIndPtr).FlatTo1D(), s);
+  });
+});
+  });
+});
+  } else {
+FillZerosCsrImpl(s, output);
+  }
+}
+
 /*!
  * \brief Kernel for performing elemwise op between dense and rsp tensor
  * \param iglobal thread id
diff --git a/src/operator/tensor/elemwise_binary_op.cc 
b/src/operator/tensor/elemwise_binary_op.cc
index e8ba2fa7234..9ccbacc2f65 100644
--- a/src/operator/tensor/elemwise_binary_op.cc
+++ b/src/operator/tensor/elemwise_binary_op.cc
@@ -63,6 +63,11 @@ bool ElemwiseBinaryOp::BackwardUseInStorageType(const 
nnvm::NodeAttrs& attrs,
   const bool invalid_ctx = dev_mask != mshadow::cpu::kDevMask;
   const auto dispatch_ex = invalid_ctx ? DispatchM

[incubator-mxnet] branch master updated: [MXNET-399] Elemwise_mul between dense and csr on CPU & GPU (#10894)

2018-05-30 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 9feecce  [MXNET-399] Elemwise_mul between dense and csr on CPU & GPU 
(#10894)
9feecce is described below

commit 9feeccecb4ab64461cfae0bd4e75dd4bcbd7c9d5
Author: Hao Jin 
AuthorDate: Wed May 30 17:33:42 2018 -0700

[MXNET-399] Elemwise_mul between dense and csr on CPU & GPU (#10894)

* support elemwise_mul between dns and csr

* address reviews and support for backward when ograd is dns
---
 src/operator/tensor/elemwise_binary_op-inl.h|  85 +
 src/operator/tensor/elemwise_binary_op.cc   |  21 
 src/operator/tensor/elemwise_binary_op.h| 121 +---
 src/operator/tensor/elemwise_binary_op_basic.cu |   4 +-
 tests/python/unittest/test_sparse_operator.py   |  14 ++-
 5 files changed, 210 insertions(+), 35 deletions(-)
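For reference, a minimal usage sketch of the mixed-storage path this commit 
adds (small hand-built arrays; the csr output storage follows from the 
dns*csr=csr contract stated in the kernel below):

```python
import mxnet as mx

dns = mx.nd.array([[1., 2.], [3., 4.]])
csr = mx.nd.array([[0., 5.], [6., 0.]]).tostype('csr')

out = mx.nd.elemwise_mul(dns, csr)   # elementwise product with a sparse input
print(out.stype)                     # expect 'csr'
print(out.asnumpy())                 # [[ 0. 10.] [18.  0.]]
```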

diff --git a/src/operator/tensor/elemwise_binary_op-inl.h 
b/src/operator/tensor/elemwise_binary_op-inl.h
index c74f1f9..911c369 100644
--- a/src/operator/tensor/elemwise_binary_op-inl.h
+++ b/src/operator/tensor/elemwise_binary_op-inl.h
@@ -496,6 +496,91 @@ void ElemwiseBinaryOp::DnsCsrDnsOp(mshadow::Stream *s,
 }
 
 /*!
+ * \brief Kernel for performing elemwise op between dense and csr matrix
+ * \param iglobal thread id
+ * \param req  type of request
+ * \param out  output array
+ * \param dns_data data array of dense input
+ * \param csr_data data array of csr input
+ * \param csr_indices  indices array of csr input
+ * \param csr_indptr   indptr array of csr input
+ * \param num_rows number of rows of both inputs
+ * \param num_cols number of columns of both inputs
+ */
+template
+struct ElemwiseDnsCsrCsrKernel {
+  template
+  MSHADOW_XINLINE static void Map(int i, DType* out, DType* dns_data,
+  const DType* csr_data, const IType* 
csr_indices,
+  const CType* csr_indptr, const nnvm::dim_t 
num_rows,
+  const nnvm::dim_t num_cols) {
+if (i < num_rows) {
+  for (int j = csr_indptr[i]; j < csr_indptr[i+1]; ++j) {
+KERNEL_ASSIGN(out[j], req, reverse ?
+   OP::Map(dns_data[i * num_cols + 
csr_indices[j]], csr_data[j]) :
+   OP::Map(csr_data[j], dns_data[i * num_cols 
+ csr_indices[j]]));
+  }
+}
+  }
+};
+
+/*! \brief DNS -op- CSR binary operator for non-canonical NDArray */
+template
+void ElemwiseBinaryOp::DnsCsrCsrOp(const nnvm::NodeAttrs &attrs,
+   const OpContext &ctx,
+   const NDArray &dns,
+   const NDArray &csr,
+   const OpReqType req,
+   const NDArray &output,
+   const bool reverse) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+  using namespace csr;
+  CHECK_EQ(dns.storage_type(), kDefaultStorage);
+  CHECK_EQ(csr.storage_type(), kCSRStorage);
+  CHECK_EQ(req, kWriteTo) << "elemwise(dns, csr) = csr only supports kWriteTo";
+  if (req == kNullOp) return;
+  const bool supported_op = std::is_same::value;
+  CHECK(supported_op == true) << "elemwise(dns, csr) = csr only supports mul";
+  const nnvm::dim_t num_csr_rows = csr.shape()[0];
+  const nnvm::dim_t num_csr_cols = csr.shape()[1];
+  const nnvm::dim_t nnz = csr.storage_shape()[0];
+  Stream *s = ctx.get_stream();
+
+  output.CheckAndAlloc({Shape1(num_csr_rows + 1), Shape1(nnz)});
+  if (csr.storage_initialized()) {
+TBlob csr_data = csr.data();
+TBlob csr_indices = csr.aux_data(kIdx);
+TBlob csr_indptr = csr.aux_data(kIndPtr);
+MSHADOW_SGL_DBL_TYPE_SWITCH(csr_data.type_flag_, DType, {
+  MSHADOW_IDX_TYPE_SWITCH(csr_indices.type_flag_, IType, {
+MSHADOW_IDX_TYPE_SWITCH(csr_indptr.type_flag_, CType, {
+  MXNET_ASSIGN_REQ_SWITCH(req, Req, {
+if (reverse) {
+  Kernel, xpu>::Launch(
+s, num_csr_rows, output.data().dptr(), 
dns.data().dptr(),
+csr_data.dptr(), csr_indices.dptr(), 
csr_indptr.dptr(),
+num_csr_rows, num_csr_cols);
+} else {
+  Kernel, xpu>::Launch(
+s, num_csr_rows, output.data().dptr(), 
dns.data().dptr(),
+csr_data.dptr(), csr_indices.dptr(), 
csr_indptr.dptr(),
+num_csr_rows, num_csr_cols);
+}
+Copy(output.aux_data(kIdx).FlatTo1D(),
+ csr.aux_data(kIdx).FlatTo1D(), s);
+Copy(output.aux_data(kIndPtr).FlatTo1D(),
+ csr.aux_data(kIndPtr).FlatTo1D(), s);
+  });
+});
+  });
+});
+  } else {
+Fi

[incubator-mxnet] branch master updated: fix dot(csr.T, dns)=dns can't be called on cpu and gpu (#11087)

2018-05-30 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8be4b8e  fix dot(csr.T, dns)=dns can't be called on cpu and gpu 
(#11087)
8be4b8e is described below

commit 8be4b8ef9b30eb90aa3885f47be6691a2bbafa1b
Author: XiaotaoChen 
AuthorDate: Thu May 31 08:34:28 2018 +0800

fix dot(csr.T, dns)=dns can't be called on cpu and gpu (#11087)
---
 src/operator/tensor/dot-inl.h | 4 
 1 file changed, 4 insertions(+)

diff --git a/src/operator/tensor/dot-inl.h b/src/operator/tensor/dot-inl.h
index 2c9a483..ffdb706 100644
--- a/src/operator/tensor/dot-inl.h
+++ b/src/operator/tensor/dot-inl.h
@@ -246,6 +246,10 @@ inline bool DotForwardInferStorageType(const 
nnvm::NodeAttrs& attrs,
 if (target_stype == kRowSparseStorage) {
   dispatched = storage_type_assign(&out_stype, kRowSparseStorage,
dispatch_mode, 
DispatchMode::kFComputeEx);
+// csr.T, rsp/dns -> dns
+} else if (target_stype == kDefaultStorage) {
+  dispatched = storage_type_assign(&out_stype, kDefaultStorage, 
dispatch_mode,
+   DispatchMode::kFComputeEx);
 }
   }
   if (!dispatched && lhs_stype == kCSRStorage && rhs_rsp_or_dns &&

-- 
To stop receiving notification emails like this one, please contact
hai...@apache.org.
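For reference, a minimal usage sketch of the call this commit enables: dot on 
a transposed csr lhs with a dense rhs producing a dense result (small 
hand-built arrays):

```python
import mxnet as mx

csr = mx.nd.array([[1., 0., 2.], [0., 3., 0.]]).tostype('csr')  # shape (2, 3)
dns = mx.nd.array([[1., 1.], [2., 2.]])                         # shape (2, 2)

out = mx.nd.dot(csr, dns, transpose_a=True)   # csr.T (3, 2) x dns (2, 2)
print(out.shape)   # (3, 2)
print(out.stype)   # expect 'default' (dense) after this fix
```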


[GitHub] eric-haibin-lin closed pull request #11087: fix dot(csr.T, dns)=dns can't be called on cpu and gpu

2018-05-30 Thread GitBox
eric-haibin-lin closed pull request #11087: fix dot(csr.T, dns)=dns can't be 
called on cpu and gpu
URL: https://github.com/apache/incubator-mxnet/pull/11087
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/dot-inl.h b/src/operator/tensor/dot-inl.h
index 2c9a483567f..ffdb706e5e3 100644
--- a/src/operator/tensor/dot-inl.h
+++ b/src/operator/tensor/dot-inl.h
@@ -246,6 +246,10 @@ inline bool DotForwardInferStorageType(const 
nnvm::NodeAttrs& attrs,
 if (target_stype == kRowSparseStorage) {
   dispatched = storage_type_assign(&out_stype, kRowSparseStorage,
dispatch_mode, 
DispatchMode::kFComputeEx);
+// csr.T, rsp/dns -> dns
+} else if (target_stype == kDefaultStorage) {
+  dispatched = storage_type_assign(&out_stype, kDefaultStorage, 
dispatch_mode,
+   DispatchMode::kFComputeEx);
 }
   }
   if (!dispatched && lhs_stype == kCSRStorage && rhs_rsp_or_dns &&


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on issue #9767: [feature request]arg_scope in gluon

2018-05-30 Thread GitBox
szha commented on issue #9767: [feature request]arg_scope in gluon
URL: 
https://github.com/apache/incubator-mxnet/issues/9767#issuecomment-393365086
 
 
   Seems doable with decorators and scope.
   
http://code.activestate.com/recipes/577382-keyword-argument-injection-with-python-decorators/
   
   With this solution, we'd need to decorate every constructor in Gluon.nn/rnn 
though.
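   A minimal sketch of the decorator-plus-scope idea mentioned above (all 
names here are hypothetical; this is generic keyword-argument injection, not 
an existing Gluon API):

```python
import functools

_SCOPE_STACK = [{}]   # innermost default-kwargs scope is on top

class arg_scope(object):
    """Context manager providing default kwargs to decorated callables."""
    def __init__(self, **defaults):
        self.defaults = defaults
    def __enter__(self):
        _SCOPE_STACK.append(dict(_SCOPE_STACK[-1], **self.defaults))
    def __exit__(self, *exc):
        _SCOPE_STACK.pop()

def inject_scope_kwargs(func):
    """Fill in kwargs from the current scope unless given explicitly."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for key, value in _SCOPE_STACK[-1].items():
            kwargs.setdefault(key, value)
        return func(*args, **kwargs)
    return wrapper

@inject_scope_kwargs
def dense(units, activation=None, use_bias=True):
    return (units, activation, use_bias)

with arg_scope(activation='relu', use_bias=False):
    print(dense(128))   # (128, 'relu', False)
print(dense(128))       # (128, None, True)
```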


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong closed pull request #10817: Static memory allocation for cached_op

2018-05-30 Thread GitBox
piiswrong closed pull request #10817: Static memory allocation for cached_op
URL: https://github.com/apache/incubator-mxnet/pull/10817
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/include/mxnet/c_api.h b/include/mxnet/c_api.h
index be47c3c14fa..6b7cf4407ed 100644
--- a/include/mxnet/c_api.h
+++ b/include/mxnet/c_api.h
@@ -987,11 +987,6 @@ MXNET_DLL int MXCreateCachedOpEx(SymbolHandle handle,
  int num_flags,
  const char** keys,
  const char** vals,
- int num_inputs,
- const char** input_names,
- int num_params,
- const char** param_names,
- NDArrayHandle* params,
  CachedOpHandle *out);
 /*!
  * \brief free cached operator
diff --git a/include/mxnet/imperative.h b/include/mxnet/imperative.h
index 758ce851321..7ea60df3302 100644
--- a/include/mxnet/imperative.h
+++ b/include/mxnet/imperative.h
@@ -35,23 +35,6 @@
 #include "./ndarray.h"
 
 namespace mxnet {
-/*! \brief CachedOp Parameters */
-struct CachedOpConfig : public dmlc::Parameter {
-  uint32_t inline_limit;
-  uint32_t forward_bulk_size;
-  uint32_t backward_bulk_size;
-  DMLC_DECLARE_PARAMETER(CachedOpConfig) {
-DMLC_DECLARE_FIELD(inline_limit)
-.set_default(2)
-.describe("Maximum number of operators that can be inlined.");
-DMLC_DECLARE_FIELD(forward_bulk_size)
-.set_default(dmlc::GetEnv("MXNET_EXEC_BULK_EXEC_MAX_NODE_TRAIN", 15))
-.describe("Segment size of bulk execution during forward pass.");
-DMLC_DECLARE_FIELD(backward_bulk_size)
-.set_default(dmlc::GetEnv("MXNET_EXEC_BULK_EXEC_MAX_NODE_TRAIN", 15))
-.describe("Segment size of bulk execution during backward pass.");
-  }
-};
 /*! \brief runtime functions for NDArray */
 class Imperative {
  public:
@@ -94,67 +77,6 @@ class Imperative {
  && info.out_grads.size() == 1;
 }
   };
-  class CachedOp {
-   public:
-CachedOp(
-const nnvm::Symbol& sym,
-const std::vector >& flags,
-const std::vector arg_names,
-const std::unordered_map >& params);
-uint32_t num_inputs() {
-  return fwd_graph_.indexed_graph().input_nodes().size();
-}
-uint32_t num_outputs() {
-  return fwd_graph_.outputs.size();
-}
-uint32_t num_backward_inputs() {
-  return bwd_ograd_dep_.size() + bwd_in_dep_.size() + bwd_out_dep_.size();
-}
-std::vector& save_inputs() {
-  return save_inputs_;
-}
-std::vector& save_outputs() {
-  return save_outputs_;
-}
-const std::unordered_set& mutable_input_nodes() {
-  return fwd_graph_.indexed_graph().mutable_input_nodes();
-}
-nnvm::Graph GetForwardGraph(const bool recording,
-const std::vector& inputs);
-nnvm::Graph GetBackwardGraph(const OpStatePtr& state,
- const std::vector& reqs,
- const std::vector& inputs);
-std::vector Gradient(const nnvm::NodePtr& node,
-  const std::vector& 
ograds);
-void Forward(const std::shared_ptr& op_ptr,
- const std::vector& args,
- const std::vector& outputs);
-void Backward(const bool retain_graph,
-  const OpStatePtr& state,
-  const std::vector& inputs,
-  const std::vector& reqs,
-  const std::vector& outputs);
-
-   private:
-struct CachedOpState {
-  std::vector buff;
-  std::vector states;
-};
-std::mutex mutex_;
-CachedOpConfig config_;
-nnvm::Graph fwd_graph_;
-nnvm::Graph grad_graph_;
-nnvm::Graph full_graph_;
-std::unordered_map > params_;
-bool inlining_;
-std::vector ograd_entries_;
-std::vector curr_grad_req_;
-std::vector bwd_in_dep_, bwd_out_dep_, bwd_ograd_dep_;
-std::vector fwd_args_idx_;
-std::vector fwd_params_idx_;
-std::vector bwd_input_eid_;
-std::vector save_inputs_, save_outputs_;
-  };
   /*! \brief whether operator recording is on. */
   bool is_training() const {
 return is_train_;
@@ -222,15 +144,6 @@ class Imperative {
   uint32_t num_inputs, uint32_t num_outputs,
   std::vector *p_save_inputs,
   std::vector *p_save_outputs);
-  void RunGraph(
-  const bool retain_graph,
-  const nnvm::IndexedGraph& idx,
-  const std::vector arrays,
-  size_t node_start, size_t node_end,
-  std::vector&& array_reqs,
-  std::vector&& ref_count,
-  std::vector *p_states,
-  const DispatchModeVector& 

[incubator-mxnet-site] branch asf-site updated: Nightly build

2018-05-30 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 6e6f87c  Nightly build
6e6f87c is described below

commit 6e6f87cc0c991a43fdd5df25bfbcf20bc8624054
Author: mxnet-ci 
AuthorDate: Thu May 31 01:47:44 2018 +

Nightly build
---
 date.txt | 1 -
 1 file changed, 1 deletion(-)

diff --git a/date.txt b/date.txt
deleted file mode 100644
index 5d95e29..000
--- a/date.txt
+++ /dev/null
@@ -1 +0,0 @@
-Wed May 30 12:06:11 UTC 2018

-- 
To stop receiving notification emails like this one, please contact
zhash...@apache.org.


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-05-30 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 6ef1ab4  Bump the publish timestamp.
6ef1ab4 is described below

commit 6ef1ab43b09fd51407240039671df31f876e81e2
Author: mxnet-ci 
AuthorDate: Thu May 31 02:12:21 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..eb1dbbd
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu May 31 02:12:21 UTC 2018

-- 
To stop receiving notification emails like this one, please contact
zhash...@apache.org.


[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-05-30 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r191974393
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,229 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl 
USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have full MKL library installed, you can use OpenBLAS by setting 
`USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies, required for MXNet, with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
+
+```
+CC=gcc-4.9 CXX=g++-4.9 cmake $MKLDNN_ROOTDIR 
-DCMAKE_INSTALL_PREFIX=$MKLDNN_INSTALLDIR -B$MKLDNN_BUILDDIR 
-DARCH_OPT_FLAGS="-mtune=generic" -DWITH_TEST=OFF -DWITH_EXAMPLE=OFF >&2
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(sysctl -n hw.ncpu) USE_OPENCV=0 USE_OPENMP=1 USE_MKLDNN=1 
USE_BLAS=apple USE_PROFILER=1
+```
+
+*Note: Temporarily disable OPENCV.*
+
+Windows
+
+To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
 
 Review comment:
   To use VS2017, please follow this 
[link](https://github.com/apache/incubator-mxnet/blob/ecdff56170c4bb2b54af5ed765b7f826e4f95e26/docs/install/index.md)
 to modify VC++ and change the Visual Studio 2017 toolset version to v14.11 
before building. VS2015 is preferred. Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

