[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

2020-01-15 Thread GitBox
ZhennanQin commented on a change in pull request #17265: Add bfloat16 
floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r367265169
 
 

 ##
 File path: 3rdparty/mshadow/mshadow/base.h
 ##
 @@ -988,6 +1034,7 @@ struct minimum {
 };
 }  // namespace red
 
+#ifndef __NVCC__
 
 Review comment:
  We don't have enough background / knowledge to enable Bfloat16 on the GPU 
side, so we probably can't make the change you proposed. Alternatively, any 
code refactoring on the GPU side is welcome; you may change this as you want 
in a follow-up PR.
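
  For context: `__NVCC__` is defined whenever a translation unit is compiled 
by nvcc, so a guard like the one above simply hides host-only bfloat16 code 
from the CUDA compiler. A minimal sketch of the pattern, using an illustrative 
stand-in type (not the actual mshadow definitions):

  ```cpp
  #include <cstdint>
  #include <cstring>

  #ifndef __NVCC__          // compiled for host (non-CUDA) builds only
  struct bf16_t {           // hypothetical bfloat16 stand-in
    uint16_t bits;          // top 16 bits of an IEEE-754 float32
  };

  inline float to_float(bf16_t x) {
    // Re-expand to float32 by shifting the stored bits back into place.
    uint32_t u = static_cast<uint32_t>(x.bits) << 16;
    float f;
    std::memcpy(&f, &u, sizeof(f));
    return f;
  }
  #endif  // __NVCC__
  ```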


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-01-15 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 616bcd3  Bump the publish timestamp.
616bcd3 is described below

commit 616bcd34abfe59510f732728c56926e822f9f830
Author: mxnet-ci 
AuthorDate: Thu Jan 16 07:13:18 2020 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..7c0a18a
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Jan 16 07:13:18 UTC 2020



[GitHub] [incubator-mxnet] mahxn0 commented on issue #14875: MXNet to ONNX export bug

2020-01-15 Thread GitBox
mahxn0 commented on issue #14875: MXNet to ONNX export bug
URL: 
https://github.com/apache/incubator-mxnet/issues/14875#issuecomment-575012459
 
 
  Same problem. When I used torch's yolov32onnx.py it was so easy to convert.
  I will give up on MXNet and never look back.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17270: [WIP] Dynamic custom operator GPU support

2020-01-15 Thread GitBox
samskalicky commented on a change in pull request #17270: [WIP] Dynamic custom 
operator GPU support
URL: https://github.com/apache/incubator-mxnet/pull/17270#discussion_r367251787
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -720,8 +751,11 @@ int MXLoadLib(const char *path) {
 gradOp.set_attr<bool>("TIsLayerOpBackward", true, plevel);
 gradOp.set_attr<FStatefulComputeEx>("FStatefulComputeEx<cpu>",
 fstateful_backward, plevel);
+gradOp.set_attr<FStatefulComputeEx>("FStatefulComputeEx<gpu>",
 
 Review comment:
   I prefer approach 1; let's not do name mangling :D


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17270: [WIP] Dynamic custom operator GPU support

2020-01-15 Thread GitBox
samskalicky commented on a change in pull request #17270: [WIP] Dynamic custom 
operator GPU support
URL: https://github.com/apache/incubator-mxnet/pull/17270#discussion_r367251499
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -720,8 +751,11 @@ int MXLoadLib(const char *path) {
 gradOp.set_attr<bool>("TIsLayerOpBackward", true, plevel);
 gradOp.set_attr<FStatefulComputeEx>("FStatefulComputeEx<cpu>",
 fstateful_backward, plevel);
+gradOp.set_attr<FStatefulComputeEx>("FStatefulComputeEx<gpu>",
 
 Review comment:
   I think that I support approach 1. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (6b9a1da -> 04c3eec)

2020-01-15 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 6b9a1da  Multi-tensor LAMB (#16893)
 add 04c3eec  grouping large vector tests based on their type and adding 
their nightly test function (#17306)

No new revisions were added by this update.

Summary of changes:
 ci/docker/runtime_functions.sh |   10 +
 tests/nightly/test_large_vector.py | 2061 ++--
 2 files changed, 1028 insertions(+), 1043 deletions(-)



[GitHub] [incubator-mxnet] apeforest merged pull request #17306: grouping large vector tests based on their type and adding their nightly test function

2020-01-15 Thread GitBox
apeforest merged pull request #17306: grouping large vector tests based on 
their type and adding their nightly test function
URL: https://github.com/apache/incubator-mxnet/pull/17306
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hanke580 commented on a change in pull request #17323: [Numpy] Kron operator

2020-01-15 Thread GitBox
hanke580 commented on a change in pull request #17323: [Numpy] Kron operator
URL: https://github.com/apache/incubator-mxnet/pull/17323#discussion_r367234104
 
 

 ##
 File path: src/operator/numpy/np_kron-inl.h
 ##
 @@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_kron-inl.h
+ * \brief Function definition of matrix numpy-compatible kron operator
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_KRON_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_KRON_INL_H_
+
+#include <vector>
+#include "np_tensordot_op-inl.h"
+#include "../mxnet_op.h"
+
+namespace mxnet {
+namespace op {
+
+template<int req>
+struct kron {
+  template<typename DType, int ndim>
+  MSHADOW_XINLINE static void Map(index_t i, DType* out,
+                                  const DType* a, const DType* b,
+                                  mshadow::Shape<ndim> ashape,
+                                  mshadow::Shape<ndim> bshape,
+                                  mshadow::Shape<ndim> oshape) {
+    using namespace mxnet_op;
+
+    auto k = unravel(i, oshape);
+    Shape<ndim> ia;
+    Shape<ndim> jb;
+    for (int q = 0; q < ndim; q++) {
+      ia[q] = int(k[q] / bshape[q]);
+      jb[q] = k[q] % bshape[q];
+    }
+    auto idx_a = ravel(ia, ashape);
+    auto idx_b = ravel(jb, bshape);
+
+    KERNEL_ASSIGN(out[i], req, a[idx_a] * b[idx_b]);
+  }
+};
+
+template<int req>
+struct kron_back_a {
+  template<typename DType, int ndim>
+  MSHADOW_XINLINE static void Map(index_t i, DType* agrad,
+                                  const DType* b, const DType* ograd,
+                                  mshadow::Shape<ndim> ashape,
+                                  mshadow::Shape<ndim> bshape,
+                                  mshadow::Shape<ndim> oshape) {
+    using namespace mxnet_op;
+
+    auto ia = unravel(i, ashape);
+    Shape<ndim> k;
+    DType temp_agrad = 0;
+
+    for (int idx_b = 0; idx_b < bshape.Size(); idx_b++) {
+      auto jb = unravel(idx_b, bshape);
+      for (int q = 0; q < ndim; q++) {
+        k[q] = ia[q] * bshape[q] + jb[q];
+      }
+      auto idx_o = ravel(k, oshape);
+      temp_agrad += b[idx_b] * ograd[idx_o];
+    }
+    KERNEL_ASSIGN(agrad[i], req, temp_agrad);
+  }
+};
+
+template<int req>
+struct kron_back_b {
+  template<typename DType, int ndim>
+  MSHADOW_XINLINE static void Map(index_t i, const DType* a,
+                                  DType* bgrad, const DType* ograd,
+                                  mshadow::Shape<ndim> ashape,
+                                  mshadow::Shape<ndim> bshape,
+                                  mshadow::Shape<ndim> oshape) {
+    using namespace mxnet_op;
+
+    auto jb = unravel(i, bshape);
+    Shape<ndim> k;
+    DType temp_bgrad = 0;
+
+    for (int idx_a = 0; idx_a < ashape.Size(); idx_a++) {
+      auto ia = unravel(idx_a, ashape);
+      for (int q = 0; q < ndim; q++) {
+        k[q] = ia[q] * bshape[q] + jb[q];
+      }
+      auto idx_o = ravel(k, oshape);
+      temp_bgrad += a[idx_a] * ograd[idx_o];
+    }
+    KERNEL_ASSIGN(bgrad[i], req, temp_bgrad);
+  }
+};
+
+template<typename xpu>
+void KronOpForwardImpl(const OpContext& ctx,
+                       const std::vector<OpReqType>& req,
+                       const TBlob& a,
+                       const TBlob& b,
+                       const TBlob& out) {
+  using namespace mshadow;
+
+  const mxnet::TShape& ashape = a.shape_;
+  const mxnet::TShape& bshape = b.shape_;
+  const mxnet::TShape& oshape = out.shape_;
+  MXNET_NDIM_SWITCH(oshape.ndim(), ndim, {
+    // Right-align both input shapes to the output's ndim, padding the
+    // shorter one with leading 1s.
+    Shape<ndim> ashape_;
+    Shape<ndim> bshape_;
+    Shape<ndim> oshape_;
+    int temp = ashape.ndim() - bshape.ndim();
+    int s_dim = temp > 0 ? bshape.ndim() : ashape.ndim();
+    for (int i = 0; i < s_dim; i++) {
+      ashape_[ndim - i - 1] = ashape[ashape.ndim() - i - 1];
+      bshape_[ndim - i - 1] = bshape[bshape.ndim() - i - 1];
+      oshape_[ndim - i - 1] = oshape[oshape.ndim() - i - 1];
+    }
+    if (temp > 0) {
+      for (int i = s_dim; i < ndim; i++) {
+        ashape_[ndim - i - 1] = ashape[ashape.ndim() - i - 1];
+        bshape_[ndim - i - 1] = 1;
+        oshape_[ndim - i - 1] = oshape[oshape.ndim() - i - 1];
+      }
+    } else {
+      for (int i = s_dim; i < ndim; i++) {
+        ashape_[ndim - i - 1] = 1;
+        bshape_[ndim - i - 1] = bshape[bshape.ndim() - i - 1];
+        oshape_[ndim - i - 1] = oshape[oshape.ndim() - i - 1];
+      }
+    }
+    if (ashape.ndim() == 0 && bshape.ndim() == 0) {
+      // Scalar case: kron degenerates to a plain product, which
+      // tensordot over zero axes already implements.
+      TensordotIntAxesImpl<xpu>(0, ctx, a, b, out, req[0]);
+      return;
+    }
+    Stream<xpu> *s = ctx.get_stream<xpu>();
+    MSHADOW_TYPE_SWITCH(out.type_flag_, DType, {
+      MXNET_ASSIGN_REQ_SWITCH(req[0], req_type, {
+        mxnet_op::Kernel<kron<req_type>, xpu>::Launch(
+          s, out.Size(), out.dptr<DType>(), a.dptr<DType>(), b.dptr<DType>(),
+          ashape_, bshape_, oshape_);
+      });
+    });
+  });
+}
+
+template<typename xpu>
+void KronOpBackwardImpl(const OpContext& ctx,
+                        const std::vector<OpReqType>& req,
+                        const TBlob& a,
+                        const TBlob& b,
+                        const TBlob& ograd,
+                        const TBlob& agrad,
+                        const TBlob& bgrad) {
+  const mxnet::TShape& ashape = a.s
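
In equation form, the index arithmetic the `kron` kernel above implements is, 
per output dimension q (writing B_q for bshape[q]):

```latex
ia_q = \left\lfloor k_q / B_q \right\rfloor, \qquad
jb_q = k_q \bmod B_q, \qquad
\mathrm{out}[k] = a[ia] \cdot b[jb]
```

which is exactly NumPy's kron definition, out[i*B + j] = a[i] * b[j], applied 
independently in every dimension.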

[GitHub] [incubator-mxnet] liuzh91 commented on issue #17134: add batch_axis in validation handler

2020-01-15 Thread GitBox
liuzh91 commented on issue #17134: add batch_axis in validation handler
URL: https://github.com/apache/incubator-mxnet/pull/17134#issuecomment-574976440
 
 
   > @liuzh91 could you rebase your PR so CI will pass?
   
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #17305: grouping large array tests based on type and updating nightly CI funtion

2020-01-15 Thread GitBox
ChaiBapchya commented on issue #17305: grouping large array tests based on type 
and updating nightly CI funtion
URL: https://github.com/apache/incubator-mxnet/pull/17305#issuecomment-574967580
 
 
  @access2rohit To prevent any issues down the road, can you please post the 
results of the tests? Thanks.
  @apeforest The PR shouldn't have been merged without the test results.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] liuzh91 commented on a change in pull request #17322: Add event handlers in validation handler

2020-01-15 Thread GitBox
liuzh91 commented on a change in pull request #17322: Add event handlers in 
validation handler
URL: https://github.com/apache/incubator-mxnet/pull/17322#discussion_r367217067
 
 

 ##
 File path: python/mxnet/gluon/contrib/estimator/event_handler.py
 ##
 @@ -181,14 +181,17 @@ class ValidationHandler(TrainBegin, BatchEnd, EpochEnd):
 Priority level of the ValidationHandler. Priority level is sorted in
 ascending order. The lower the number is, the higher priority level the
 handler is.
+event_handlers : EventHandler or list of EventHandlers
 
 Review comment:
   doc string added


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cyrusbehr opened a new issue #17338: Unable to build mxnet with USE_OPENMP=OFF

2020-01-15 Thread GitBox
cyrusbehr opened a new issue #17338: Unable to build mxnet with USE_OPENMP=OFF
URL: https://github.com/apache/incubator-mxnet/issues/17338
 
 
   I need to compile mxnet to run in single thread mode. From what I have 
found, using the environment variables `MXNET_ENGINE_TYPE=NaiveEngine` and 
`OMP_NUM_THREADS=1` and using `omp_set_num_threads(1);` in my c++ code gets the 
number of threads down to 2. To get the number of threads down to 1, I need to 
compile the library with `USE_OPENMP=OFF` (discussed more in 
[this](https://github.com/apache/incubator-mxnet/issues/15275) issue). In the 
past, this has worked successfully for me. However, when I tried doing this 
again, I got an error during compilation. I am using MKLDNN for the backend.
   
   Here are my steps:
   ```
   git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
   cd mxnet
   mkdir build 
   cd build
   
   cmake -DUSE_CPP_PACKAGE=1 -DUSE_CUDA=0 -DUSE_MKL_IF_AVAILABLE=1 
-DUSE_OPENCV=0 -DUSE_LAPACK=0 -DUSE_OPENMP=0 \
 -DMKL_INCLUDE_DIR=/opt/intel/compilers_and_libraries/linux/mkl/include 
-DMKL_RT_LIBRARY=/opt/intel/compilers_and_libraries/linux/mkl/lib/intel64/libmkl_rt.so
 ..
   
   make -j16
   ```
   
   Here is the cmake output:
   ```
   -- The C compiler identification is GNU 7.4.0
   -- The CXX compiler identification is GNU 7.4.0
   -- Check for working C compiler: /usr/bin/cc
   -- Check for working C compiler: /usr/bin/cc -- works
   -- Detecting C compiler ABI info
   -- Detecting C compiler ABI info - done
   -- Detecting C compile features
   -- Detecting C compile features - done
   -- Check for working CXX compiler: /usr/bin/c++
   -- Check for working CXX compiler: /usr/bin/c++ -- works
   -- Detecting CXX compiler ABI info
   -- Detecting CXX compiler ABI info - done
   -- Detecting CXX compile features
   -- Detecting CXX compile features - done
   -- CMAKE_CROSSCOMPILING FALSE
   -- CMAKE_HOST_SYSTEM_PROCESSOR x86_64
   -- CMAKE_SYSTEM_PROCESSOR x86_64
   -- CMAKE_SYSTEM_NAME Linux
   -- CMake version '3.13.3' using generator 'Unix Makefiles'
   -- Performing Test SUPPORT_CXX11
   -- Performing Test SUPPORT_CXX11 - Success
   -- Performing Test SUPPORT_CXX0X
   -- Performing Test SUPPORT_CXX0X - Success
   -- Performing Test SUPPORT_MSSE3
   -- Performing Test SUPPORT_MSSE3 - Success
   -- Performing Test SUPPORT_MSSE2
   -- Performing Test SUPPORT_MSSE2 - Success
   -- Determining F16C support
   -- Performing Test COMPILER_SUPPORT_MF16C
   -- Performing Test COMPILER_SUPPORT_MF16C - Success
   -- F16C enabled
   -- CMAKE_BUILD_TYPE is unset, defaulting to Release
   -- MKL-DNN compat: set DNNL_BUILD_EXAMPLES to MKLDNN_BUILD_EXAMPLES with 
value `OFF`
   -- MKL-DNN compat: set DNNL_BUILD_TESTS to MKLDNN_BUILD_TESTS with value 
`OFF`
   -- MKL-DNN compat: set DNNL_ENABLE_JIT_PROFILING to 
MKLDNN_ENABLE_JIT_PROFILING with value `OFF`
   -- MKL-DNN compat: set DNNL_LIBRARY_TYPE to MKLDNN_LIBRARY_TYPE with value 
`STATIC`
   -- MKL-DNN compat: set DNNL_ARCH_OPT_FLAGS to MKLDNN_ARCH_OPT_FLAGS with 
value ``
   -- Looking for pthread.h
   -- Looking for pthread.h - found
   -- Looking for pthread_create
   -- Looking for pthread_create - not found
   -- Looking for pthread_create in pthreads
   -- Looking for pthread_create in pthreads - not found
   -- Looking for pthread_create in pthread
   -- Looking for pthread_create in pthread - found
   -- Found Threads: TRUE
   -- Found OpenMP_C: -fopenmp (found version "4.5")
   -- Found OpenMP_CXX: -fopenmp (found version "4.5") 
   -- Found OpenMP: TRUE (found version "4.5")  
   -- GPU support is disabled
   -- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE) 
   -- Found Git: /usr/bin/git (found version "2.17.1") 
   -- Intel(R) VTune(TM) Amplifier JIT profiling disabled
   -- Found MKL: /opt/intel/compilers_and_libraries/linux/mkl/include  
   -- Found MKL (include: /opt/intel/compilers_and_libraries/linux/mkl/include, 
lib: /opt/intel/compilers_and_libraries/linux/mkl/lib/intel64/libmkl_rt.so
   -- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE) 
   -- Could NOT find Jemalloc (missing: JEMALLOC_LIBRARY JEMALLOC_INCLUDE_DIR) 
   -- OpenCV Disabled
   -- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE) 
   -- Could NOT find Jemalloc (missing: JEMALLOC_LIBRARY JEMALLOC_INCLUDE_DIR) 
   CMake Warning at 3rdparty/googletest/googletest/CMakeLists.txt:47 (project):
 VERSION keyword not followed by a value or was followed by a value that
 expanded to nothing.
   
   
   -- Found PythonInterp: /usr/bin/python (found version "2.7.17") 
   -- Found GTest: gtest  
   -- Looking for clock_gettime in rt
   -- Looking for clock_gettime in rt - found
   -- Looking for fopen64
   -- Looking for fopen64 - not found
   -- Looking for C++ include cxxabi.h
   -- Looking for C++ include cxxabi.h - found
   -- Looking for nanosleep
   -- Looking for nanosleep - found
   -- Looking for backtrace
   -- Looking for backtrace - found
   -

[GitHub] [incubator-mxnet] ResearchingDexter commented on issue #15647: Segmentation fault: 11

2020-01-15 Thread GitBox
ResearchingDexter commented on issue #15647: Segmentation fault: 11
URL: 
https://github.com/apache/incubator-mxnet/issues/15647#issuecomment-574960028
 
 
  Uninstalling and reinstalling MXNet didn't work. I have reinstalled MXNet 
many times. Please help me. @leezu


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] shawnbrar opened a new issue #17337: MXNET Installation for R-Package are wrong

2020-01-15 Thread GitBox
shawnbrar opened a new issue #17337: MXNET Installation for R-Package are wrong
URL: https://github.com/apache/incubator-mxnet/issues/17337
 
 
  The official installation instructions for the MXNet R package on Ubuntu are 
wrong. Under the "Install the MXNet Package for R" heading, it tells you to go 
to the source root directory to build the MXNet Perl package. Why is that so?
   ![Screenshot_2020-01-16-08-08-26-830_com brave 
browser](https://user-images.githubusercontent.com/59639827/72488616-72498900-3837-11ea-9b80-c2a05e89e11c.jpg)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (7b349dd -> 6b9a1da)

2020-01-15 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 7b349dd  grouping large array tests based on type and updating nightly 
CI function (#17305)
 add 6b9a1da  Multi-tensor LAMB (#16893)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/contrib.py |  76 +++
 python/mxnet/optimizer/optimizer.py | 121 ---
 python/mxnet/test_utils.py  |  77 ---
 src/operator/contrib/multi_lamb-inl.h   | 359 
 src/operator/contrib/multi_lamb.cc  | 251 ++
 src/operator/contrib/multi_lamb.cu  | 261 +++
 src/operator/contrib/multi_sum_sq-inl.h |   4 +
 src/operator/contrib/multi_sum_sq.cc|   6 +
 src/operator/contrib/multi_sum_sq.cu|  29 ++-
 tests/python/unittest/test_optimizer.py |  46 +++-
 10 files changed, 1157 insertions(+), 73 deletions(-)
 mode change 100644 => 100755 python/mxnet/optimizer/optimizer.py
 mode change 100644 => 100755 python/mxnet/test_utils.py
 create mode 100644 src/operator/contrib/multi_lamb-inl.h
 create mode 100644 src/operator/contrib/multi_lamb.cc
 create mode 100644 src/operator/contrib/multi_lamb.cu
 mode change 100644 => 100755 tests/python/unittest/test_optimizer.py




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17297: Fix NCCL Cmake autodetect issue

2020-01-15 Thread GitBox
ChaiBapchya commented on a change in pull request #17297: Fix NCCL Cmake 
autodetect issue
URL: https://github.com/apache/incubator-mxnet/pull/17297#discussion_r367201643
 
 

 ##
 File path: cmake/Modules/FindNCCL.cmake
 ##
 @@ -33,6 +33,23 @@
 
 set(NCCL_ROOT_DIR "" CACHE PATH "Folder contains NVIDIA NCCL")
 
+# first check in the /usr/local/cuda before other paths
 
 Review comment:
   I'm not sure why it breaks the assumption. Can you help me rephrase the 
comment? I can remove "first" if that's what confuses the users.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin merged pull request #16893: Multi-tensor LAMB

2020-01-15 Thread GitBox
eric-haibin-lin merged pull request #16893: Multi-tensor LAMB
URL: https://github.com/apache/incubator-mxnet/pull/16893
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #16893: Multi-tensor LAMB

2020-01-15 Thread GitBox
eric-haibin-lin commented on issue #16893: Multi-tensor LAMB
URL: https://github.com/apache/incubator-mxnet/pull/16893#issuecomment-574949055
 
 
   Thank you @MoisesHer for addressing all the review comments 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #17231: cannot quantization example

2020-01-15 Thread GitBox
wuxun-zhang commented on issue #17231: cannot quantization example
URL: 
https://github.com/apache/incubator-mxnet/issues/17231#issuecomment-574948869
 
 
  @zhhoper May I know your exact command to build MXNet from source? And your 
complete benchmark commands? Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #17270: [WIP] Dynamic custom operator GPU support

2020-01-15 Thread GitBox
eric-haibin-lin commented on a change in pull request #17270: [WIP] Dynamic 
custom operator GPU support
URL: https://github.com/apache/incubator-mxnet/pull/17270#discussion_r367199659
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -720,8 +751,11 @@ int MXLoadLib(const char *path) {
 gradOp.set_attr<bool>("TIsLayerOpBackward", true, plevel);
 gradOp.set_attr<FStatefulComputeEx>("FStatefulComputeEx<cpu>",
 fstateful_backward, plevel);
+gradOp.set_attr<FStatefulComputeEx>("FStatefulComputeEx<gpu>",
 
 Review comment:
  Are you in favor of approach 1 or approach 2? They have different 
implications for the recommended nnvm registration API for custom ops.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on issue #17335: Excessive GPU memory usage with dynamic shape input using Gluon interface

2020-01-15 Thread GitBox
ptrendx commented on issue #17335: Excessive GPU memory usage with dynamic 
shape input using Gluon interface
URL: 
https://github.com/apache/incubator-mxnet/issues/17335#issuecomment-574946066
 
 
   @DickJC123 Could you take a look?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-01-15 Thread GitBox
larroy commented on issue #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293#issuecomment-574946070
 
 
   This fixes https://github.com/apache/incubator-mxnet/issues/15492


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-01-15 Thread GitBox
larroy commented on issue #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293#issuecomment-574945806
 
 
   https://github.com/apache/incubator-mxnet/pull/17031


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-01-15 Thread GitBox
larroy commented on issue #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293#issuecomment-574945865
 
 
   @mxnet-label-bot add [breaking]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-01-15 Thread GitBox
larroy commented on issue #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293#issuecomment-574945507
 
 
   ```
   piotr@34-222-129-72:0:~/mxnet (cmake_cuda_compiler)+$ lsb_release -a
   No LSB modules are available.
   Distributor ID: Ubuntu
   Description:Ubuntu 18.04.3 LTS
   Release:18.04
   Codename:   bionic
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-01-15 Thread GitBox
larroy commented on issue #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293#issuecomment-574945274
 
 
   ```
   --Python Info--
   ('Version  :', '2.7.17')
   ('Compiler :', 'GCC 7.4.0')
   ('Build:', ('default', 'Nov  7 2019 10:07:09'))
   ('Arch :', ('64bit', ''))
   Pip Info---
   No corresponding pip install for current python.
   --MXNet Info---
   No MXNet installed.
   --System Info--
   ('Platform :', 'Linux-4.15.0-1054-aws-x86_64-with-Ubuntu-18.04-bionic')
   ('system   :', 'Linux')
   ('node :', '34-222-129-72')
   ('release  :', '4.15.0-1054-aws')
   ('version  :', '#56-Ubuntu SMP Thu Nov 7 16:15:59 UTC 2019')
   --Hardware Info--
   ('machine  :', 'x86_64')
   ('processor:', 'x86_64')
   Architecture:x86_64
   CPU op-mode(s):  32-bit, 64-bit
   Byte Order:  Little Endian
   CPU(s):  8
   On-line CPU(s) list: 0-7
   Thread(s) per core:  2
   Core(s) per socket:  4
   Socket(s):   1
   NUMA node(s):1
   Vendor ID:   GenuineIntel
   CPU family:  6
   Model:   79
   Model name:  Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
   Stepping:1
   CPU MHz: 1455.803
   CPU max MHz: 3000.
   CPU min MHz: 1200.
   BogoMIPS:4600.12
   Hypervisor vendor:   Xen
   Virtualization type: full
   L1d cache:   32K
   L1i cache:   32K
   L2 cache:256K
   L3 cache:46080K
   NUMA node0 CPU(s):   0-7
   Flags:   fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm 
constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq 
ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes 
xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault 
invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx 
xsaveopt
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0029 
sec, LOAD: 0.4997 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0168 sec, LOAD: 
0.3225 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.0236 sec, LOAD: 0.1133 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0100 sec, 
LOAD: 0.0518 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1954 sec, LOAD: 
0.2625 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.3784 sec, LOAD: 
0.1460 sec.
   --Environment--
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #17293: [Build] Add a reasonable default for CMAKE_CUDA_COMPILER in *nix

2020-01-15 Thread GitBox
larroy commented on issue #17293: [Build] Add a reasonable default for 
CMAKE_CUDA_COMPILER in *nix
URL: https://github.com/apache/incubator-mxnet/pull/17293#issuecomment-574945020
 
 
  We never had to do such a thing. This is happening due to CMake changes. I 
applied your suggestion. I would suggest applying my proposed patch, which 
makes things smoother for users in 99% of cases.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cyrusbehr edited a comment on issue #15275: How to run mxnet(C++) in single-thread mode?

2020-01-15 Thread GitBox
cyrusbehr edited a comment on issue #15275: How to run mxnet(C++) in 
single-thread mode?
URL: 
https://github.com/apache/incubator-mxnet/issues/15275#issuecomment-574939560
 
 
   @igor-byel I experienced the same issue you are facing, and here is my fix. 
The issue is likely due to the way you are using the `putenv` function. From 
the linux manual: 
   
   >  The putenv() function adds or changes the value of environment
  variables.  The argument string is of the form name=value.  If name
  does not already exist in the environment, then string is added to
  the environment.  If name does exist, then the value of name in the
  environment is changed to value.  **The string pointed to by string
  becomes part of the environment, so altering the string changes the
  environment.**
   
   In your above implementation,  your `mxnetConfig1` and `mxnetConfig2` 
variables are no longer defined once they go out of scope (which is at the end 
of the `getImplementation` function). You therefore have a dangling pointer in 
your environment. Make `mxnetConfig1` and `mxnetConfig2` global or member variables 
for the changes to persist. 
   
   For more info, refer to [this](https://stackoverflow.com/q/57351676/4943329) 
stack overflow question.
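   
   To make the failure mode concrete, here is a minimal sketch of the fix (the 
variable and function names mirror your snippet but are otherwise 
illustrative):
   
   ```cpp
   #include <cstdlib>
   
   // WRONG: the buffer is local, so after the function returns the
   // environment keeps a dangling pointer to dead stack memory.
   void getImplementationWrong() {
       char mxnetConfig1[] = "MXNET_ENGINE_TYPE=NaiveEngine";
       putenv(mxnetConfig1);
   }  // mxnetConfig1 is destroyed here
   
   // OK: static storage lives for the whole program, so the string
   // installed by putenv() stays valid.
   static char mxnetConfig1[] = "MXNET_ENGINE_TYPE=NaiveEngine";
   void getImplementationOk() {
       putenv(mxnetConfig1);
   }
   ```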


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] cyrusbehr commented on issue #15275: How to run mxnet(C++) in single-thread mode?

2020-01-15 Thread GitBox
cyrusbehr commented on issue #15275: How to run mxnet(C++) in single-thread 
mode?
URL: 
https://github.com/apache/incubator-mxnet/issues/15275#issuecomment-574939560
 
 
   @igor-byel I experienced the same issue you are facing, and here is my fix. 
The issue is likely due to the way you are using the `putenv` function. From 
the linux manual: 
   
   >  The putenv() function adds or changes the value of environment
  variables.  The argument string is of the form name=value.  If name
  does not already exist in the environment, then string is added to
  the environment.  If name does exist, then the value of name in the
  environment is changed to value.  **The string pointed to by string
  becomes part of the environment, so altering the string changes the
  environment.**
   
   In your above implementation,  your `mxnetConfig1` and `mxnetConfig2` 
variables are no longer defined once they go out of scope (which is at the end 
of the `getImplementation` function). The strings are therefore removed from 
the environment. Make `mxnetConfig1` and `mxnetConfig2` global or member variables 
for the changes to persist. 
   
   For more info, refer to [this](https://stackoverflow.com/q/57351676/4943329) 
stack overflow question.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin opened a new pull request #17336: [BUGFIX] fix model zoo parallel download

2020-01-15 Thread GitBox
eric-haibin-lin opened a new pull request #17336: [BUGFIX] fix model zoo 
parallel download
URL: https://github.com/apache/incubator-mxnet/pull/17336
 
 
   ## Description ##
   Fixes https://github.com/apache/incubator-mxnet/issues/17332 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] DickJC123 commented on issue #16831: [CI] Python2: CPU - hangs after test_create_np_param

2020-01-15 Thread GitBox
DickJC123 commented on issue #16831: [CI] Python2: CPU - hangs after 
test_create_np_param
URL: 
https://github.com/apache/incubator-mxnet/issues/16831#issuecomment-574934466
 
 
   The hang I reported in this old issue has occurred again in the exact same 
place, which I inferred to be in test_np_resize() because that is the next test 
to run after test_np_reshape():
   
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-17312/1/pipeline/356
   
   A retry run I launched also appears to be hung on the test that follows 
test_np_empty(), namely test_np_empty_like():
   
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-17312/2/pipeline/372
   
   I'm not sure how the problem is related to log file length.  The lengths of 
the failing log files are similar, but shorter than a passing run:
   ```
   
    307812  hang1_log.txt
    311103  hang2_log.txt
    416848  passes_log.txt
   ```
   @reminisce @haojin2 @ptrendx 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-01-15 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a91d89d  Bump the publish timestamp.
a91d89d is described below

commit a91d89db4f32163e76b8d6e0cd740a6f6e393c3c
Author: mxnet-ci 
AuthorDate: Thu Jan 16 01:03:40 2020 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..0289e1b
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Jan 16 01:03:40 UTC 2020



[GitHub] [incubator-mxnet] blchu commented on a change in pull request #17311: Added beamsearch_set_finished Operator

2020-01-15 Thread GitBox
blchu commented on a change in pull request #17311: Added 
beamsearch_set_finished Operator
URL: https://github.com/apache/incubator-mxnet/pull/17311#discussion_r367184172
 
 

 ##
 File path: src/operator/contrib/beamsearch_set_finished-inl.h
 ##
 @@ -0,0 +1,148 @@
+#include <vector>
+
+#include "../operator_common.h"
+namespace mxnet {
+namespace op {
+
+namespace beamsearch_set_finished {
+enum BeamsearchSetFinishedInputs {kDist, kScores, kFin, kOverMax};
+enum BeamsearchSetFinishedOutputs {kOut};
+}
+
+
+//template
+struct beamsearch_set_finished_forward {
+    template<typename DType, typename IType>
+    MSHADOW_XINLINE static void Map(int i, DType* out_data, const DType* in_data,
+                                    const DType* scores, const IType* fin, const IType* over_max,
+                                    const DType mask_val, const int score_idx, const int eos_idx,
+                                    int V) {
+        int j = i / V;  // beam index
+        int k = i % V;  // position within the vocabulary
+        bool f = static_cast<bool>(fin[j]);
+        bool o = static_cast<bool>(over_max[j]);
+        bool s = k == score_idx;
+        bool e = k == eos_idx;
+        bool input = !f && (!o || e);
+        bool score = f && s;
+        //bool mask = !(input || score);
+        //out_data[i] = (input * in_data[i]) + (score * scores[j]) + (mask * mask_val);
+        if (input) out_data[i] = in_data[i];
+        else if (score) out_data[i] = scores[j];
+        else out_data[i] = mask_val;
+    }
+};
+
+struct BeamsearchSetFinishedParam : public dmlc::Parameter<BeamsearchSetFinishedParam> {
+    int score_idx;
+    int eos_idx;
+    float mask_val;
+    DMLC_DECLARE_PARAMETER(BeamsearchSetFinishedParam) {
+        DMLC_DECLARE_FIELD(score_idx)
+        .set_default(0)
+        .describe("Index to set the score of finished beams.");
+        DMLC_DECLARE_FIELD(eos_idx)
+        .describe("Index of the EOS token.");
+        DMLC_DECLARE_FIELD(mask_val)
+        .set_default(std::numeric_limits<float>::lowest())
+        .describe("Padding value used to mask out unwanted tokens in beams.");
+    }
+};
+
+inline bool BeamsearchSetFinishedShape(const nnvm::NodeAttrs& attrs,
+                                       mxnet::ShapeVector* in_attrs,
+                                       mxnet::ShapeVector* out_attrs) {
+    const BeamsearchSetFinishedParam& param =
+        nnvm::get<BeamsearchSetFinishedParam>(attrs.parsed);
+    CHECK_EQ(in_attrs->size(), 4U);
+    CHECK_EQ(out_attrs->size(), 1U);
+
+    auto dist = in_attrs->at(beamsearch_set_finished::kDist);
+    auto scores = in_attrs->at(beamsearch_set_finished::kScores);
+    auto fin = in_attrs->at(beamsearch_set_finished::kFin);
+    auto over_max = in_attrs->at(beamsearch_set_finished::kOverMax);
+    CHECK_EQ(dist.ndim(), 2U);
+    CHECK_EQ(scores.ndim(), 2U);
+    CHECK_EQ(fin.ndim(), 1U);
+    CHECK_EQ(over_max.ndim(), 1U);
+
+    CHECK_EQ(dist[0], scores[0]);
+    CHECK_EQ(dist[0], fin[0]);
+    CHECK_EQ(dist[0], over_max[0]);
+    CHECK_EQ(scores[1], 1);
+
+    mxnet::TShape score_shape(dist.ndim(), -1);
+    score_shape[0] = dist[0];
+    score_shape[1] = 1;
+
+    mxnet::TShape bool_shape(dist.ndim() - 1, -1);
+    bool_shape[0] = dist[0];
+
+    SHAPE_ASSIGN_CHECK(*out_attrs, 0, dist);
+    SHAPE_ASSIGN_CHECK(*in_attrs, 0, out_attrs->at(beamsearch_set_finished::kOut));
+    SHAPE_ASSIGN_CHECK(*in_attrs, 1, score_shape);
+    SHAPE_ASSIGN_CHECK(*in_attrs, 2, bool_shape);
+    SHAPE_ASSIGN_CHECK(*in_attrs, 3, bool_shape);
+
+    return true;
+}
+
+inline bool BeamsearchSetFinishedType(const nnvm::NodeAttrs& attrs,
+                                      std::vector<int>* in_attrs,
+                                      std::vector<int>* out_attrs) {
+    CHECK_EQ(in_attrs->size(), 4U);
+    CHECK_EQ(out_attrs->size(), 1U);
+
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, (*in_attrs)[0]);
+    TYPE_ASSIGN_CHECK(*in_attrs, 0, (*out_attrs)[0]);
+    TYPE_ASSIGN_CHECK(*in_attrs, 1, (*out_attrs)[0]);
+    TYPE_ASSIGN_CHECK(*in_attrs, 2, mshadow::kInt32);
+    TYPE_ASSIGN_CHECK(*in_attrs, 3, mshadow::kInt32);
+    return (*in_attrs)[0] != -1 && (*in_attrs)[1] != -1;
+}
+
+template<typename xpu>
+void NoopGrad(const nnvm::NodeAttrs& attrs,
+              const OpContext& ctx,
+              const std::vector<TBlob>& inputs,
+              const std::vector<OpReqType>& req,
+              const std::vector<TBlob>& outputs) {
+    LOG(FATAL) << "This operator should only be used for inference";
+}
+
+template<typename xpu>
+void BeamsearchSetFinishedForward(const nnvm::NodeAttrs& attrs,
+                                  const OpContext& ctx,
+                                  const std::vector<TBlob>& inputs,
+                                  const std::vector<OpReqType>& req,
+                                  const std::vector<TBlob>& outputs) {
+    if (req[beamsearch_set_finished::kOut] == mxnet::kNullOp) return;
+    const BeamsearchSetFinishedParam& param =
+        nnvm::get<BeamsearchSetFinishedParam>(attrs.parsed);
+    CHECK_EQ(inputs.size(), 4U);
+    CHECK_EQ(outputs.size(), 1U);
+    CHECK_EQ(req.size(), 1U);
+
+    const mxnet::TShape& out_shape = outputs[beamsearch_set_finished::k

[GitHub] [incubator-mxnet] ptrendx commented on issue #15547: Pybind11 license issue from onnx tensorrt

2020-01-15 Thread GitBox
ptrendx commented on issue #15547: Pybind11 license issue from onnx tensorrt
URL: 
https://github.com/apache/incubator-mxnet/issues/15547#issuecomment-574929311
 
 
  The latest ONNX-TensorRT (v7.0) contains the onnx commit that has the right 
LICENSE in pybind11. However, switching to it would first require some changes 
to the way it is called, and second require matching TensorRT 7, so I would 
much prefer to do this in the 1.7 release and not as the last thing in 1.6.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anjishnu commented on issue #17298: [MXNET-1438] Adding SDML loss function

2020-01-15 Thread GitBox
anjishnu commented on issue #17298: [MXNET-1438] Adding SDML loss function
URL: https://github.com/apache/incubator-mxnet/pull/17298#issuecomment-574927315
 
 
  Verifying that the mention of this PR in #17292 is a typo; the author meant 
to refer to #17208. We are not breaking Horovod.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ZhennanQin commented on issue #14728: [MXNET-1386] fix for shape mismatch

2020-01-15 Thread GitBox
ZhennanQin commented on issue #14728: [MXNET-1386] fix for shape mismatch
URL: https://github.com/apache/incubator-mxnet/pull/14728#issuecomment-574926950
 
 
  @ChaiBapchya `test_subgraph.test_pos_conv_add2` doesn't fail on master, but 
fails with this PR. What I mean is that 
https://github.com/apache/incubator-mxnet/pull/15518 can resolve the same 
problem (shape mismatch) without introducing a test failure.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default dtype

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default 
dtype
URL: https://github.com/apache/incubator-mxnet/pull/17283#discussion_r367181864
 
 

 ##
 File path: src/operator/tensor/init_op.h
 ##
 @@ -44,6 +44,19 @@
 namespace mxnet {
 namespace op {
 
+inline int GetDefaultDtype() {
 
 Review comment:
   & maybe you can move this to `src/common/utils.h`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default dtype

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17283: [NumPy]Set numpy default 
dtype
URL: https://github.com/apache/incubator-mxnet/pull/17283#discussion_r367181706
 
 

 ##
 File path: src/operator/tensor/init_op.h
 ##
 @@ -44,6 +44,19 @@
 namespace mxnet {
 namespace op {
 
+inline int GetDefaultDtype() {
 
 Review comment:
   better use `DType`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham closed issue #16492: Broken Links - Python Tutorial | Getting Started

2020-01-15 Thread GitBox
aaronmarkham closed issue #16492: Broken Links - Python Tutorial | Getting 
Started 
URL: https://github.com/apache/incubator-mxnet/issues/16492
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham closed issue #16509: autograd tutorial is missing its images

2020-01-15 Thread GitBox
aaronmarkham closed issue #16509: autograd tutorial is missing its images
URL: https://github.com/apache/incubator-mxnet/issues/16509
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei commented on a change in pull request #17322: Add event handlers in validation handler

2020-01-15 Thread GitBox
roywei commented on a change in pull request #17322: Add event handlers in 
validation handler
URL: https://github.com/apache/incubator-mxnet/pull/17322#discussion_r367176134
 
 

 ##
 File path: python/mxnet/gluon/contrib/estimator/event_handler.py
 ##
 @@ -181,14 +181,17 @@ class ValidationHandler(TrainBegin, BatchEnd, EpochEnd):
 Priority level of the ValidationHandler. Priority level is sorted in
 ascending order. The lower the number is, the higher priority level the
 handler is.
+event_handlers : EventHandler or list of EventHandlers
 
 Review comment:
   Could you add more detailed doc? Something like " EventHandler or list of 
EventHandlers to be used by the `eval_fn`". Otherwise, it will be confusing for 
users.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17279: [Numpy] Add linalg.pinv op

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17279: [Numpy] Add linalg.pinv op
URL: https://github.com/apache/incubator-mxnet/pull/17279#discussion_r367175367
 
 

 ##
 File path: src/operator/numpy/linalg/np_pinv.cc
 ##
 @@ -0,0 +1,195 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_pinv.cc
+ * \brief CPU implementation of the PINV Operator
+ */
+
+#include "./np_pinv-inl.h"
+
+namespace mxnet {
+namespace op {
+
+bool PinvOpShape(const nnvm::NodeAttrs& attrs,
+ mxnet::ShapeVector *in_attrs,
+ mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  const mxnet::TShape& a_shape = (*in_attrs)[0];
+  const mxnet::TShape& rcond_shape = (*in_attrs)[1];
+  const mxnet::TShape& pinv_shape = (*out_attrs)[0];
+  const int a_ndim = a_shape.ndim();
+
+  if (shape_is_known(a_shape)) {
+// Forward shape inference.
+CHECK_GE(a_ndim, 2)
+  << "Array must be at least two-dimensional";
 
 Review comment:
   Probably no need for line wrap here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17279: [Numpy] Add linalg.pinv op

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17279: [Numpy] Add linalg.pinv op
URL: https://github.com/apache/incubator-mxnet/pull/17279#discussion_r367175459
 
 

 ##
 File path: src/operator/numpy/linalg/np_pinv.cc
 ##
 @@ -0,0 +1,195 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_pinv.cc
+ * \brief CPU implementation of the PINV Operator
+ */
+
+#include "./np_pinv-inl.h"
+
+namespace mxnet {
+namespace op {
+
+bool PinvOpShape(const nnvm::NodeAttrs& attrs,
+ mxnet::ShapeVector *in_attrs,
+ mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  const mxnet::TShape& a_shape = (*in_attrs)[0];
+  const mxnet::TShape& rcond_shape = (*in_attrs)[1];
+  const mxnet::TShape& pinv_shape = (*out_attrs)[0];
+  const int a_ndim = a_shape.ndim();
+
+  if (shape_is_known(a_shape)) {
+// Forward shape inference.
+CHECK_GE(a_ndim, 2)
+  << "Array must be at least two-dimensional";
+    // Calculate pinv shape.
+    std::vector<int> pinv_shape_vec(a_ndim, -1);
+    for (int i = 0; i < a_ndim - 2; ++i) {
+      pinv_shape_vec[i] = a_shape[i];
+    }
+    pinv_shape_vec[a_ndim - 2] = a_shape[a_ndim - 1];
+    pinv_shape_vec[a_ndim - 1] = a_shape[a_ndim - 2];
+    SHAPE_ASSIGN_CHECK(*out_attrs, 0, mxnet::TShape(pinv_shape_vec.begin(), pinv_shape_vec.end()));
+    // Check rcond shape.
+    GetOrCheckCutoffAndLargeShape(attrs, a_shape, rcond_shape, nullptr, nullptr);
+  } else {
+    // Backward shape inference.
+    if (shape_is_known(pinv_shape)) {
+      const int pinv_ndim = pinv_shape.ndim();
+      CHECK_GE(pinv_ndim, 2)
+        << "Array must be at least two-dimensional";
+      // Calculate 'a' shape.
+      std::vector<int> a_shape_vec(pinv_ndim, -1);
+      for (int i = 0; i < pinv_ndim - 2; ++i) {
+        a_shape_vec[i] = pinv_shape[i];
+      }
+      a_shape_vec[pinv_ndim - 2] = pinv_shape[pinv_ndim - 1];
+      a_shape_vec[pinv_ndim - 1] = pinv_shape[pinv_ndim - 2];
+      SHAPE_ASSIGN_CHECK(*in_attrs, 0, mxnet::TShape(a_shape_vec.begin(), a_shape_vec.end()));
+      // Check rcond shape.
+      GetOrCheckCutoffAndLargeShape(attrs, (*in_attrs)[0], rcond_shape, nullptr, nullptr);
+    }
+  }
+  return shape_is_known(*in_attrs) && shape_is_known(*out_attrs);
+}
+
+inline bool PinvOpType(const nnvm::NodeAttrs& attrs,
+                       std::vector<int>* in_attrs,
+                       std::vector<int>* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  int a_type = in_attrs->at(0);
+  int rcond_type = in_attrs->at(1);
+  // float16 is unsupported.
+  CHECK_NE(a_type, mshadow::kFloat16)
+    << "array type float16 is unsupported in linalg.";
+  CHECK(rcond_type == mshadow::kFloat32 || rcond_type == mshadow::kFloat64)
+    << "rcond type should be float32 or float64.";
+  if (mshadow::kFloat32 == a_type) {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, in_attrs->at(0));
+  } else {
+    TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kFloat64);
+  }
+  return out_attrs->at(0) != -1;
+}
+
+DMLC_REGISTER_PARAMETER(PinvParam);
+
+NNVM_REGISTER_OP(_npi_pinv)
+.describe(R"code()code" ADD_FILELINE)
+.set_attr_parser(mxnet::op::ParamParser<PinvParam>)
+.set_num_inputs(2)
+.set_num_outputs(1)
+.set_attr<nnvm::FListInputNames>("FListInputNames", [](const NodeAttrs& attrs) {
+  return std::vector<std::string>{"A", "rcond"};
+})
+.set_attr<mxnet::FInferShape>("FInferShape", PinvOpShape)
+.set_attr<nnvm::FInferType>("FInferType", PinvOpType)
+.set_attr<FResourceRequest>("FResourceRequest", [](const NodeAttrs& attrs) {
+  return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};
+})
+.set_attr<FCompute>("FCompute<cpu>", PinvOpForward<cpu>)
+.set_attr<nnvm::FGradient>("FGradient", MakeZeroGradNodes)
+.add_argument("A", "NDArray-or-Symbol", "Tensor of matrix")
+.add_argument("rcond", "NDArray-or-Symbol", "Cutoff for small singular values.")
+.add_arguments(PinvParam::__FIELDS__());
+
+bool PinvScalarRcondOpShape(const nnvm::NodeAttrs& attrs,
+mxnet::ShapeVector *in_attrs,
+mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  const mxnet::TShape& a_shape = (*in_attrs)[0];
+  const mxnet::TShape& pinv_shape = (*out_attrs)[0];
+ 
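
   The dtype rule in `PinvOpType` above mirrors NumPy's linalg promotion:
   float32 stays float32, anything else (float16 being rejected outright)
   comes back as float64. A small check against stock NumPy:

   ```
   import numpy as np

   print(np.linalg.pinv(np.ones((3, 3), dtype=np.float32)).dtype)  # float32
   print(np.linalg.pinv(np.ones((3, 3), dtype=np.float64)).dtype)  # float64
   ```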

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17279: [Numpy] Add linalg.pinv op

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17279: [Numpy] Add linalg.pinv op
URL: https://github.com/apache/incubator-mxnet/pull/17279#discussion_r367175293
 
 

 ##
 File path: src/operator/numpy/linalg/np_pinv.cc
 ##
 @@ -0,0 +1,195 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file np_pinv.cc
+ * \brief CPU implementation of the PINV Operator
+ */
+
+#include "./np_pinv-inl.h"
+
+namespace mxnet {
+namespace op {
+
+bool PinvOpShape(const nnvm::NodeAttrs& attrs,
+ mxnet::ShapeVector *in_attrs,
+ mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  const mxnet::TShape& a_shape = (*in_attrs)[0];
+  const mxnet::TShape& rcond_shape = (*in_attrs)[1];
+  const mxnet::TShape& pinv_shape = (*out_attrs)[0];
+  const int a_ndim = a_shape.ndim();
+
+  if (shape_is_known(a_shape)) {
+// Forward shape inference.
+CHECK_GE(a_ndim, 2)
+  << "Array must be at least two-dimensional";
+    // Calculate pinv shape.
+    std::vector<int> pinv_shape_vec(a_ndim, -1);
+    for (int i = 0; i < a_ndim - 2; ++i) {
+      pinv_shape_vec[i] = a_shape[i];
+    }
+    pinv_shape_vec[a_ndim - 2] = a_shape[a_ndim - 1];
+    pinv_shape_vec[a_ndim - 1] = a_shape[a_ndim - 2];
+    SHAPE_ASSIGN_CHECK(*out_attrs, 0, mxnet::TShape(pinv_shape_vec.begin(), pinv_shape_vec.end()));
+    // Check rcond shape.
+    GetOrCheckCutoffAndLargeShape(attrs, a_shape, rcond_shape, nullptr, nullptr);
+  } else {
+    // Backward shape inference.
+    if (shape_is_known(pinv_shape)) {
 
 Review comment:
   `else if (shape_is_known(pinv_shape))`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Jerryzcn opened a new issue #17335: Excessive GPU memory usage with dynamic shape input

2020-01-15 Thread GitBox
Jerryzcn opened a new issue #17335: Excessive GPU memory usage with dynamic 
shape input
URL: https://github.com/apache/incubator-mxnet/issues/17335
 
 
   ## Description
   When using the threaded engine with dynamic shape input, MXNet uses an 
excessive amount of GPU memory. In the following script, MXNet is only able to 
reach a batch size of 16, while TensorFlow, PyTorch and MXNet w/ NaiveEngine 
can reach a batch size of 256.
   
   ## To Reproduce
   The following script creates a dynamically shaped dataset, a dataloader, a 
padding batchify function, and a simple neural network. It trains the network 
with an exponentially increasing batch size: for every epoch, the batch size 
is multiplied by 2.
   
   https://gist.github.com/Jerryzcn/bc300b431f4c2868158f3a309dc44e78
   
   ## Environment
   
   *AMI*
   Deep Learning AMI (Ubuntu 18.04) Version 26.0 - ami-010a96c958f9ee5cf
   *Instance Type*
   p3.16xlarge
   *Storage*
   GP2 volume with 2000GB capacity
   *Numpy Environment*
   None
   *MXNet Environment*
   MXNet(+Keras2) with Python3 (CUDA 10.1 and Intel MKL-DNN)
   source activate mxnet_p36
   *Pytorch Environment*
   PyTorch with Python3 (CUDA 10.1 and Intel MKL)
source activate pytorch_p36
   *Tensorflow Environment*
   TensorFlow 2(+Keras2) with Python3 (CUDA 10.0 and Intel MKL-DNN)
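
   For what it's worth, the NaiveEngine comparison in the description can be
   reproduced by switching engines through the documented `MXNET_ENGINE_TYPE`
   environment variable (the training script itself is the gist above):

   ```
   import os
   # Must be set before mxnet is imported; NaiveEngine executes ops
   # synchronously, which sidesteps the threaded engine's memory behavior.
   os.environ["MXNET_ENGINE_TYPE"] = "NaiveEngine"
   import mxnet as mx
   ```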
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei commented on a change in pull request #17334: Don't unnecessarily relicense FindCUDAToolkit.cmake

2020-01-15 Thread GitBox
roywei commented on a change in pull request #17334: Don't unnecessarily 
relicense FindCUDAToolkit.cmake
URL: https://github.com/apache/incubator-mxnet/pull/17334#discussion_r367173016
 
 

 ##
 File path: LICENSE
 ##
 @@ -326,6 +326,8 @@
  Copyright 2005-2015, Google Inc.
 8. OpenMP Testsuite - For details, see, 3rdparty/openmp/testsuite/LICENSE
  Copyright (c) 2011, 2012 University of Houston System
+8. CMake FindCUDAToolkit.cmake - For details, see, 
cmake/Module/FindCUDAToolkit.cmake
 
 Review comment:
   should be 9.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (09cc72e -> 7b349dd)

2020-01-15 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 09cc72e  Update NOTICE to fix copyright years (#17330)
 add 7b349dd  grouping large array tests based on type and updating nightly 
CI function (#17305)

No new revisions were added by this update.

Summary of changes:
 ci/docker/runtime_functions.sh|4 +-
 tests/nightly/test_large_array.py | 3269 ++---
 2 files changed, 1622 insertions(+), 1651 deletions(-)






[GitHub] [incubator-mxnet] leezu opened a new pull request #17334: Don't unnecessarily relicense FindCUDAToolkit.cmake

2020-01-15 Thread GitBox
leezu opened a new pull request #17334: Don't unnecessarily relicense 
FindCUDAToolkit.cmake
URL: https://github.com/apache/incubator-mxnet/pull/17334
 
 
   ## Description ##
   Don't unnecessarily relicense FindCUDAToolkit.cmake. As per recommendation 
in https://www.apache.org/legal/src-headers.html#3party
   
   @roywei 
   
   https://github.com/apache/incubator-mxnet/issues/17329
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the 
source of the dataset, expected performance on the test set, and a reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [X] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [X] Don't unnecessarily relicense FindCUDAToolkit.cmake
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest merged pull request #17305: grouping large array tests based on type and updating nightly CI function

2020-01-15 Thread GitBox
apeforest merged pull request #17305: grouping large array tests based on type 
and updating nightly CI function
URL: https://github.com/apache/incubator-mxnet/pull/17305
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest opened a new issue #17333: Consolidate build from source instruction into one page

2020-01-15 Thread GitBox
apeforest opened a new issue #17333: Consolidate build from source instruction 
into one page
URL: https://github.com/apache/incubator-mxnet/issues/17333
 
 
   ## Description
   Currently there are three pages of instructions for building from source:
   
   https://mxnet.apache.org/get_started/build_from_source
   https://mxnet.apache.org/get_started/ubuntu_setup
   https://mxnet.apache.org/get_started/centos_setup.html
   
   They are mostly similar and hard to maintain. Can we just have one page for 
the most common Linux platforms and refer other users to Docker if needed?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei edited a comment on issue #17329: License issue with 1.6.0.rc1

2020-01-15 Thread GitBox
roywei edited a comment on issue #17329: License issue with 1.6.0.rc1
URL: 
https://github.com/apache/incubator-mxnet/issues/17329#issuecomment-574913499
 
 
   > @roywei thanks. When removing the ASF header from 
incubator-mxnet/cmake/Modules/FindCUDAToolkit.cmake, how to pass the RAT 
license check? There is no `.rat-excludes` files in the mxnet repository. How 
to exclude the file?
   
   here it is: 
https://github.com/apache/incubator-mxnet/blob/master/tests/nightly/apache_rat_license_check/rat-excludes
   
   It's used by CI step here: 
https://github.com/apache/incubator-mxnet/blob/master/ci/docker/runtime_functions.sh#L1457
   
   More reference and history: 
https://cwiki.apache.org/confluence/display/MXNET/MXNet+Source+Licenses


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei commented on issue #17329: License issue with 1.6.0.rc1

2020-01-15 Thread GitBox
roywei commented on issue #17329: License issue with 1.6.0.rc1
URL: 
https://github.com/apache/incubator-mxnet/issues/17329#issuecomment-574913499
 
 
   > @roywei thanks. When removing the ASF header from 
incubator-mxnet/cmake/Modules/FindCUDAToolkit.cmake, how to pass the RAT 
license check? There is no `.rat-excludes` files in the mxnet repository. How 
to exclude the file?
   
   here it is: 
https://github.com/apache/incubator-mxnet/blob/master/tests/nightly/apache_rat_license_check/rat-excludes
   More reference and history: 
https://cwiki.apache.org/confluence/display/MXNET/MXNet+Source+Licenses


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17329: License issue with 1.6.0.rc1

2020-01-15 Thread GitBox
leezu commented on issue #17329: License issue with 1.6.0.rc1
URL: 
https://github.com/apache/incubator-mxnet/issues/17329#issuecomment-574911354
 
 
   @roywei thanks. When removing the ASF header from 
incubator-mxnet/cmake/Modules/FindCUDAToolkit.cmake, how to pass the RAT 
license check? There is no `.rat-excludes` files in the mxnet repository. How 
to exclude the file?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin opened a new issue #17332: Race condition in downloading model from model zoo in parallel

2020-01-15 Thread GitBox
eric-haibin-lin opened a new issue #17332: Race condition in downloading model 
from model zoo in parallel 
URL: https://github.com/apache/incubator-mxnet/issues/17332
 
 
   When I use Horovod for training and call
   
   `model = get_model(model_name, pretrained=True)`
   
   It complains with
   
   ```
   Exception in thread Thread-5:
   Traceback (most recent call last):
 File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
   self.run()
 File "/usr/lib/python3.6/threading.py", line 864, in run
   self._target(*self._args, **self._kwargs)
 File "tests/python/unittest/test_gluon_model_zoo.py", line 32, in fn
   model = get_model(model_name, pretrained=True, root='parallel_model/')
 File 
"/home/ubuntu/src/mxnet/python/mxnet/gluon/model_zoo/vision/__init__.py", line 
152, in get_model
   return models[name](**kwargs)
 File 
"/home/ubuntu/src/mxnet/python/mxnet/gluon/model_zoo/vision/mobilenet.py", line 
375, in mobilenet_v2_0_25
   return get_mobilenet_v2(0.25, **kwargs)
 File 
"/home/ubuntu/src/mxnet/python/mxnet/gluon/model_zoo/vision/mobilenet.py", line 
250, in get_mobilenet_v2
   get_model_file('mobilenetv2_%s' % version_suffix, root=root), ctx=ctx)
 File "/home/ubuntu/src/mxnet/python/mxnet/gluon/model_zoo/model_store.py", 
line 115, in get_model_file
   os.remove(zip_file_path)
   FileNotFoundError: [Errno 2] No such file or directory: 
'parallel_model/mobilenetv2_0.25-ae8f9392.zip'
   ```
   The get_model API breaks if multiple processes are doing it at the same 
time. 
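
   A hedged sketch of one caller-side workaround until `get_model_file` is
   made concurrency-safe: give each process its own download root. The model
   name and directory layout here are illustrative:

   ```
   import os
   from mxnet.gluon.model_zoo.vision import get_model

   # A per-process cache directory avoids two processes unzipping and
   # removing the same zip file at once.
   root = os.path.join("model_cache", "pid%d" % os.getpid())
   model = get_model("mobilenetv2_0.25", pretrained=True, root=root)
   ```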


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei edited a comment on issue #17329: License issue with 1.6.0.rc1

2020-01-15 Thread GitBox
roywei edited a comment on issue #17329: License issue with 1.6.0.rc1
URL: 
https://github.com/apache/incubator-mxnet/issues/17329#issuecomment-574909620
 
 
   > > @hzfan @reminisce @haojin2 Looks like we need to remove the ASF header 
if this file is directly copied from numpy, and add it to whitelist of the 
header check.
   > > 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/numpy/np_einsum_op-inl.h
   > 
   > @roywei The file includes modifications, such as
   > 
   > 
https://github.com/apache/incubator-mxnet/blob/bd67723da96e6d36e72c9a42535a4fe68f234a71/src/operator/numpy/np_einsum_op-inl.h#L1088
   > 
   > It seems valid to re-license the file under Apache license given that the 
original license header is included. (Above link suggests this approach is not 
recommended though)
   > 
   > If we choose not to relicense files with minor modifications, 
https://github.com/apache/incubator-mxnet/blob/master/cmake/Modules/FindCUDAToolkit.cmake
 also needs to be updated to remove the ASF header.
   > The file contains the following minor modifications
   > 
   > 
https://github.com/apache/incubator-mxnet/blob/bd67723da96e6d36e72c9a42535a4fe68f234a71/cmake/Modules/FindCUDAToolkit.cmake#L713-L715
   
   I'll leave the authors to decide whether to add Apache license if it's 
modified by MXNet contributors. Either way, we also need to acknowledge the 
original License and CopyRight in the 
[LICENSE](https://github.com/apache/incubator-mxnet/blob/master/LICENSE) file.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest opened a new issue #17331: [mxnet 2.0] Turning on large tensor support by default

2020-01-15 Thread GitBox
apeforest opened a new issue #17331: [mxnet 2.0] Turning on large tensor 
support by default
URL: https://github.com/apache/incubator-mxnet/issues/17331
 
 
   ## Description
   Currently, MXNet only supports tensor size smaller than 2^31. To support 
large tensors, users need to recompile MXNet with USE_INT64_TENSOR_SIZE 
compiler flag set to ON. 
   
   Large tensor is used often in applications such as recommendation system 
with sparse embedding matrix and graph neural networks such as DGL.
   
   To provide a better user experience, we would like to turn on this compiler 
flag by default so that MXNet binary release will support large tensors.
   
   RFC: 
https://lists.apache.org/thread.html/df53b8c26e9e0433378dd803baba9fec4dd922728a5ce9135dc164b3@%3Cdev.mxnet.apache.org%3E
 
   
   ## Current Status:
   Large tensor support is already implemented in MXNet backend and C API. Over 
80 operators have been tested and more are being tested.
   
   There was performance degradation in a few operators, such as transpose; it 
has been fixed (https://github.com/apache/incubator-mxnet/pull/16104)
   
   ## TODO
   - update MXNet development doc and FAQ for adding new operators 
   (@ChaiBapchya )
   - turning on nightly tests for large tensor (@access2rohit )
   - adding end-to-end tests for a list of models (TBD)
   - setting the flag to ON and clean up
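
   As a reference point for the testing items above, whether a given binary
   was built with the flag can be checked at runtime; `INT64_TENSOR_SIZE` is
   the feature name exposed through `mxnet.runtime` (sketch, assuming a
   reasonably recent build):

   ```
   from mxnet.runtime import Features

   features = Features()
   print(features.is_enabled("INT64_TENSOR_SIZE"))  # True on an opted-in build
   ```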
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei commented on issue #17329: License issue with 1.6.0.rc1

2020-01-15 Thread GitBox
roywei commented on issue #17329: License issue with 1.6.0.rc1
URL: 
https://github.com/apache/incubator-mxnet/issues/17329#issuecomment-574909620
 
 
   > > @hzfan @reminisce @haojin2 Looks like we need to remove the ASF header 
if this file is directly copied from numpy, and add it to whitelist of the 
header check.
   > > 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/numpy/np_einsum_op-inl.h
   > 
   > @roywei The file includes modifications, such as
   > 
   > 
https://github.com/apache/incubator-mxnet/blob/bd67723da96e6d36e72c9a42535a4fe68f234a71/src/operator/numpy/np_einsum_op-inl.h#L1088
   > 
   > It seems valid to re-license the file under Apache license given that the 
original license header is included. (Above link suggests this approach is not 
recommended though)
   > 
   > If we choose not to relicense files with minor modifications, 
https://github.com/apache/incubator-mxnet/blob/master/cmake/Modules/FindCUDAToolkit.cmake
 also needs to be updated to remove the ASF header.
   > The file contains the following minor modifications
   > 
   > 
https://github.com/apache/incubator-mxnet/blob/bd67723da96e6d36e72c9a42535a4fe68f234a71/cmake/Modules/FindCUDAToolkit.cmake#L713-L715
   
   I'll leave the authors to decide whether to add Apache license if it's 
modified by MXNet contributors. Either way, we also need to acknowledge the 
License and CopyRight in the 
[LICENSE](https://github.com/apache/incubator-mxnet/blob/master/LICENSE) file.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (bd67723 -> 09cc72e)

2020-01-15 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from bd67723  Fix operators lying about their number of inputs (#17049)
 add 09cc72e  Update NOTICE to fix copyright years (#17330)

No new revisions were added by this update.

Summary of changes:
 NOTICE | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)






[GitHub] [incubator-mxnet] leezu merged pull request #17330: update notice

2020-01-15 Thread GitBox
leezu merged pull request #17330: update notice
URL: https://github.com/apache/incubator-mxnet/pull/17330
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on issue #17329: License issue with 1.6.0.rc1

2020-01-15 Thread GitBox
leezu commented on issue #17329: License issue with 1.6.0.rc1
URL: 
https://github.com/apache/incubator-mxnet/issues/17329#issuecomment-574907676
 
 
   > @hzfan @reminisce @haojin2 Looks like we need to remove the ASF header if 
this file is directly copied from numpy, and add it to whitelist of the header 
check.
   > 
https://github.com/apache/incubator-mxnet/blob/master/src/operator/numpy/np_einsum_op-inl.h
   
   @roywei The file includes modifications, such as
   
   
https://github.com/apache/incubator-mxnet/blob/bd67723da96e6d36e72c9a42535a4fe68f234a71/src/operator/numpy/np_einsum_op-inl.h#L1088
   
   It seems valid to re-license the file under Apache license given that the 
original license header is included. (Above link suggests this approach is not 
recommended though)
   
   If we choose not to relicense files with minor modifications, 
https://github.com/apache/incubator-mxnet/blob/master/cmake/Modules/FindCUDAToolkit.cmake
 also needs to be updated to remove the ASF header.
   The file contains the following minor modifications
   
   
https://github.com/apache/incubator-mxnet/blob/bd67723da96e6d36e72c9a42535a4fe68f234a71/cmake/Modules/FindCUDAToolkit.cmake#L713-L715


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17234: Op Quantile/Percentile [Numpy]

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile 
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r367163259
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -6023,6 +6023,159 @@ def nonzero(a):
 return tuple([out[i] for i in range(len(out))])
 
 
+@set_module('mxnet.ndarray.numpy')
+def percentile(a, q, axis=None, out=None, overwrite_input=None, interpolation='linear', keepdims=False):  # pylint: disable=too-many-arguments
+"""
+Compute the q-th percentile of the data along the specified axis.
+Returns the q-th percentile(s) of the array elements.
+
+Parameters
+--
+a : array_like
 
 Review comment:
  Better use `ndarray`, since `array_like` could also include list, tuple, etc.
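
   For readers, the semantics being documented track the official NumPy
   operator, so a stock NumPy run is a handy reference:

   ```
   import numpy as np

   a = np.array([[10, 7, 4], [3, 2, 1]])
   print(np.percentile(a, 50))          # 3.5 (median of the flattened array)
   print(np.percentile(a, 50, axis=0))  # [6.5 4.5 2.5]
   ```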


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TEChopra1000 commented on issue #16492: Broken Links - Python Tutorial | Getting Started

2020-01-15 Thread GitBox
TEChopra1000 commented on issue #16492: Broken Links - Python Tutorial | 
Getting Started 
URL: 
https://github.com/apache/incubator-mxnet/issues/16492#issuecomment-574905381
 
 
   @aaronmarkham these links have been fixed. This ticket can be closed. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei commented on issue #17134: add batch_axis in validation handler

2020-01-15 Thread GitBox
roywei commented on issue #17134: add batch_axis in validation handler
URL: https://github.com/apache/incubator-mxnet/pull/17134#issuecomment-574903912
 
 
   @liuzh91 could you rebase your PR so CI will pass?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (be9e17e -> bd67723)

2020-01-15 Thread dickjc123
This is an automated email from the ASF dual-hosted git repository.

dickjc123 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from be9e17e  Fixes #17304 Flaky Test -> test_higher_order_grad.test_tanh 
(#17321)
 add bd67723  Fix operators lying about their number of inputs (#17049)

No new revisions were added by this update.

Summary of changes:
 src/operator/contrib/bilinear_resize-inl.h |  9 
 src/operator/contrib/bilinear_resize.cc|  2 +-
 src/operator/contrib/bounding_box.cc   |  2 +-
 src/operator/contrib/roi_align.cc  |  1 +
 src/operator/custom/custom.cc  |  2 +-
 src/operator/image/image_random.cc |  2 +-
 src/operator/leaky_relu.cc | 13 ++
 src/operator/nn/batch_norm.cc  |  1 +
 src/operator/nn/concat.cc  |  8 
 src/operator/nn/convolution.cc |  4 ++
 src/operator/nn/ctc_loss.cc|  2 +-
 src/operator/nn/deconvolution.cc   |  4 ++
 src/operator/nn/lrn.cc |  1 +
 src/operator/nn/pooling.cc |  5 +++
 src/operator/nn/softmax_activation.cc  |  1 +
 src/operator/nn/upsampling.cc  |  8 
 src/operator/numpy/np_broadcast_reduce_op_value.cc |  2 +-
 src/operator/operator_common.h | 52 +++---
 src/operator/rnn.cc| 14 ++
 src/operator/softmax_output.cc |  1 +
 src/operator/tensor/broadcast_reduce_norm_value.cc |  1 +
 21 files changed, 105 insertions(+), 30 deletions(-)



[GitHub] [incubator-mxnet] TEChopra1000 commented on issue #16509: autograd tutorial is missing its images

2020-01-15 Thread GitBox
TEChopra1000 commented on issue #16509: autograd tutorial is missing its images
URL: 
https://github.com/apache/incubator-mxnet/issues/16509#issuecomment-574903662
 
 
   @aaronmarkham this issue has been resolved. You can close this ticket. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei edited a comment on issue #17329: License issue with 1.6.0.rc1

2020-01-15 Thread GitBox
roywei edited a comment on issue #17329: License issue with 1.6.0.rc1
URL: 
https://github.com/apache/incubator-mxnet/issues/17329#issuecomment-574829986
 
 
   @PatricZhao could you help with the "Copyright 2019 Intel Corporation" entry 
and items 3 and 4 under mkldnn? Thanks


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services





[GitHub] [incubator-mxnet] DickJC123 merged pull request #17049: Fix operators lying about their number of inputs

2020-01-15 Thread GitBox
DickJC123 merged pull request #17049: Fix operators lying about their number of 
inputs
URL: https://github.com/apache/incubator-mxnet/pull/17049
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #17329: License issue with 1.6.0.rc1

2020-01-15 Thread GitBox
aaronmarkham commented on issue #17329: License issue with 1.6.0.rc1
URL: 
https://github.com/apache/incubator-mxnet/issues/17329#issuecomment-574900706
 
 
   > @aaronmarkham any idea how to fix the font files in #2? Can we move it to 
s3 and download it when building the docs?
   
   I'd research the fonts' licenses, add the appropriate license references, 
and leave them where they are. Moving to s3 would just add to site latency, and 
we don't want that.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #17265: Add bfloat16 floating-point format support based on AMP

2020-01-15 Thread GitBox
eric-haibin-lin commented on issue #17265: Add bfloat16 floating-point format 
support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#issuecomment-574898654
 
 
   @ElaineBao thanks for the explanation


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on a change in pull request #17325: Fix Flaky Test Higher Order Grad

2020-01-15 Thread GitBox
larroy commented on a change in pull request #17325: Fix Flaky Test Higher 
Order Grad
URL: https://github.com/apache/incubator-mxnet/pull/17325#discussion_r367152446
 
 

 ##
 File path: python/mxnet/test_utils.py
 ##
 @@ -102,6 +102,16 @@ def random_arrays(*shapes):
 return arrays
 
 
+def random_uniform_arrays(*shapes, low=0.0, high=1.0):
+    """Generate some random numpy arrays."""
+    arrays = [np.array(np.random.uniform(low, high), dtype=default_dtype())
+              if len(s) == 0 else np.random.uniform(low, high, size=s).astype(default_dtype())
+              for s in shapes]
 
 Review comment:
  I don't like returning different types based on inputs; the function should 
just return a list of arrays. Other than that the PR looks fine and the 
approach is good. Thanks for the PR.
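
   A minimal sketch of what larroy is asking for: always return a list, never
   a bare array. `default_dtype` is stubbed here; in MXNet it lives in
   `mxnet.test_utils`:

   ```
   import numpy as np

   def default_dtype():
       return np.float32  # stand-in for mxnet.test_utils.default_dtype()

   def random_uniform_arrays(*shapes, low=0.0, high=1.0):
       """Return one uniform random array per requested shape, as a list."""
       # size=() already yields a 0-d array, so scalars need no special case.
       return [np.random.uniform(low, high, size=s).astype(default_dtype())
               for s in shapes]

   arrs = random_uniform_arrays((2, 3), ())
   print([a.shape for a in arrs])  # [(2, 3), ()]
   ```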


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on a change in pull request #17297: Fix NCCL Cmake autodetect issue

2020-01-15 Thread GitBox
leezu commented on a change in pull request #17297: Fix NCCL Cmake autodetect 
issue
URL: https://github.com/apache/incubator-mxnet/pull/17297#discussion_r367149908
 
 

 ##
 File path: cmake/Modules/FindNCCL.cmake
 ##
 @@ -33,6 +33,23 @@
 
 set(NCCL_ROOT_DIR "" CACHE PATH "Folder contains NVIDIA NCCL")
 
+# first check in the /usr/local/cuda before other paths
 
 Review comment:
   "first check" may break the assumption that `NCCL_ROOT_DIR` can be used to 
specify nccl root directory?
   
   Would adding `/usr/local/cuda` as last item in `find_path`, `find_library` 
below be sufficient? Or does it throw an error if `/usr/local/cuda` doesn't 
exist?
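
   A sketch of that fallback in the module's own CMake, illustrative rather
   than the merged fix: `PATHS` entries are searched only after `HINTS` and
   the standard locations, so `NCCL_ROOT_DIR` stays authoritative, and a
   missing `/usr/local/cuda` is simply skipped rather than raising an error:

   ```
   find_path(NCCL_INCLUDE_DIRS
     NAMES nccl.h
     HINTS ${NCCL_ROOT_DIR}/include
     PATHS /usr/local/cuda/include)

   find_library(NCCL_LIBRARIES
     NAMES nccl
     HINTS ${NCCL_ROOT_DIR}/lib ${NCCL_ROOT_DIR}/lib64
     PATHS /usr/local/cuda/lib64)
   ```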


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on issue #17234: Op Quantile/Percentile [Numpy]

2020-01-15 Thread GitBox
haojin2 commented on issue #17234: Op Quantile/Percentile [Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#issuecomment-574894032
 
 
   Also, a bad non-ASCII character: 
   ```
   Failure: SyntaxError (Non-ASCII character '\xe2' in file 
/work/mxnet/python/mxnet/ndarray/numpy/_op.py on line 6050, but no encoding 
declared; see http://python.org/dev/peps/pep-0263/ for details (_op.py, line 
6049))
   ```
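
   The offending byte is almost certainly a typographic quote or ellipsis
   (U+2018/U+2019/U+2026 all start with 0xE2 in UTF-8) pasted from the NumPy
   docs; Python 2 rejects non-ASCII source bytes without a coding declaration
   (PEP 263). Two common fixes, sketched:

   ```
   # option 1: add a coding declaration at the top of _op.py
   # -*- coding: utf-8 -*-

   # option 2 (usually preferred): keep docstrings ASCII-only
   text = u"\u2018linear\u2019"
   print(text.replace(u"\u2018", "'").replace(u"\u2019", "'"))  # 'linear'
   ```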


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on issue #17234: Op Quantile/Percentile [Numpy]

2020-01-15 Thread GitBox
haojin2 commented on issue #17234: Op Quantile/Percentile [Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#issuecomment-574893569
 
 
   address the sanity issues: 
http://jenkins.mxnet-ci.amazon-ml.com/job/mxnet-validation/job/sanity/job/PR-17234/5/display/redirect


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367147434
 
 

 ##
 File path: src/operator/numpy/np_pad_op-inl.h
 ##
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+                               const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+    ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template <int ndim>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+                                              const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  mshadow::Shape<ndim> ret;
+  #pragma unroll
+  for (index_t i = ndim - 1, j = idx; i >= 0; --i) {
+    auto tmp = j / shape[i];
+    ret[i] = j - tmp * shape[i];
+    j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+    DMLC_DECLARE_FIELD(pad_width)
+    .describe("Number of values padded to the edges of each axis. "
+              "((before_1, after_1), … (before_N, after_N)) unique pad widths for each axis. "
+              "((before, after),) yields same before and after pad for each axis. "
+              "(pad,) or int is a shortcut for before = after = pad width for all axes.");
+    DMLC_DECLARE_FIELD(mode)
+    .set_default(1)
+    .describe("str or function, optional");
+    DMLC_DECLARE_FIELD(reflect_type)
+    .set_default("even")
+    .describe("Used in ‘reflect’, and ‘symmetric’. "
+              "The ‘even’ style is the default with an unaltered reflection around "
+              "the edge value. For the ‘odd’ style, the extended part of the array "
+              "is created by subtracting the reflected values from two times the edge value.");
+    DMLC_DECLARE_FIELD(constant_value)
+    .set_default(0.0)
+    .describe("Used in ‘constant’. The values to set the padded values for each axis. "
+              "((before_1, after_1), ... (before_N, after_N)) unique pad constants for each axis. "
+              "((before, after),) yields same before and after constants for each axis. "
+              "(constant,) or constant is a shortcut for before = after = constant for all axes. "
+              "Default is 0.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+                                       const mxnet::Tuple<mxnet::Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+    auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+    return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+    int i;
+    int sshape_number = ishape.ndim();
+    mxnet::TShape oshape(ishape.ndim(), -1);
+    for (i = ishape.ndim() - 1; i >= 0; i--) {
+      int base = ishape[i];
+      base = base + pad_width[i][0] + pad_width[i][1];
+      oshape[i] = base;
+    }
+    return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+inline bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
+mxnet::ShapeVector* in_attrs,
+mxnet::ShapeVector* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& ishape = (*in_attrs)[0];
+  if (!mxnet::ndim_is_known(ishape)) {
+return false;
+  }
+  const 
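
   For orientation, the helpers and the shape function above have direct NumPy
   analogues, which makes them easy to sanity-check with plain NumPy:

   ```
   import numpy as np

   # rravel / uunravel mirror NumPy's row-major index helpers:
   print(np.ravel_multi_index((1, 2), (2, 3)))  # 5, i.e. 1*3 + 2
   print(np.unravel_index(5, (2, 3)))           # (1, 2)

   # NumpyPadShapeImpl implements the np.pad shape rule: each axis grows
   # by its (before, after) padding.
   a = np.zeros((2, 3))
   b = np.pad(a, ((1, 2), (3, 4)), mode='constant')
   print(b.shape)  # (5, 10): (2+1+2, 3+3+4)
   ```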

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146763
 
 

 ##
 File path: src/operator/numpy/np_pad_op-inl.h
 ##

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146951
 
 

 ##
 File path: src/operator/numpy/np_pad_op-inl.h
 ##

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367147044
 
 

 ##
 File path: src/operator/numpy/np_pad_op-inl.h
 ##
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+   const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<int ndim>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+  const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  mshadow::Shape ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+auto tmp = j / shape[i];
+ret[i] = j - tmp*shape[i];
+j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+DMLC_DECLARE_FIELD(pad_width)
+.describe("Number of values padded to the edges of each axis. "
+  "((before_1, after_1), … (before_N,"
+  "after_N)) unique pad widths for each axis. ((before, 
after),) "
+  "yields same before and"
+  "after pad for each axis. "
+  "(pad,) or int is a shortcut for before = after = pad width 
for all"
+  "axes.");
+DMLC_DECLARE_FIELD(mode)
+.set_default(1)
+.describe("str or function, optional");
+DMLC_DECLARE_FIELD(reflect_type)
+.set_default("even")
+.describe("Used in ‘reflect’, and ‘symmetric’. "
+  "The ‘even’ style is the default with an unaltered 
reflection around "
+  "the edge value. For the ‘odd’ style,"
+  "the extended part of the array is created by subtracting 
the "
+  "reflected values from two times the edge value.");
+DMLC_DECLARE_FIELD(constant_value)
+.set_default(0.0)
+.describe("Used in ‘constant’. The values to set the padded values for 
each axis."
+  "((before_1, after_1), ... (before_N, after_N)) unique pad 
constants for"
+  "each axis."
+  "((before, after),) yields same before and after constants 
for each axis."
+  "(constant,) or constant is a shortcut for before = after = 
constant for all"
+  "axes."
+  "Default is 0.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+   const mxnet::Tuple<mxnet::Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+int i;
+int sshape_number = ishape.ndim();
+mxnet::TShape oshape(ishape.ndim(), -1);
+for (i = ishape.ndim() - 1; i >=0; i--) {
+  int base = ishape[i];
+  base = base + pad_width[i][0] + pad_width[i][1];
+  oshape[i] = base;
+}
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+inline bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
+mxnet::ShapeVector* in_attrs,
+mxnet::ShapeVector* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& ishape = (*in_attrs)[0];
+  if (!mxnet::ndim_is_known(ishape)) {
+return false;
+  }
+  const 

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146462
 
 

 ##
 File path: src/operator/numpy/np_pad_op-inl.h
 ##
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+   const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<int ndim>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+  const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  mshadow::Shape ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+auto tmp = j / shape[i];
+ret[i] = j - tmp*shape[i];
+j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+DMLC_DECLARE_FIELD(pad_width)
+.describe("Number of values padded to the edges of each axis. "
 
 Review comment:
   2-space indentation.
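
   For illustration, a sketch of the declaration with 2-space continuation
   indentation applied -- one reading of the note; the describe text itself
   is unchanged and elided here:
   ```c++
   DMLC_DECLARE_FIELD(pad_width)
     .describe("Number of values padded to the edges of each axis. "
               "((before_1, after_1), ... (before_N, after_N)) unique pad "
               "widths for each axis.");
   ```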


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146496
 
 

 ##
 File path: src/operator/numpy/np_pad_op-inl.h
 ##
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+   const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<int ndim>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+  const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  mshadow::Shape ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+auto tmp = j / shape[i];
+ret[i] = j - tmp*shape[i];
+j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+DMLC_DECLARE_FIELD(pad_width)
+.describe("Number of values padded to the edges of each axis. "
+  "((before_1, after_1), … (before_N,"
+  "after_N)) unique pad widths for each axis. ((before, 
after),) "
+  "yields same before and"
+  "after pad for each axis. "
+  "(pad,) or int is a shortcut for before = after = pad width 
for all"
+  "axes.");
+DMLC_DECLARE_FIELD(mode)
+.set_default(1)
 
 Review comment:
   2-space indentation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146557
 
 

 ##
 File path: src/operator/numpy/np_pad_op-inl.h
 ##
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+   const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<int ndim>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+  const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  mshadow::Shape ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+auto tmp = j / shape[i];
+ret[i] = j - tmp*shape[i];
+j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+DMLC_DECLARE_FIELD(pad_width)
+.describe("Number of values padded to the edges of each axis. "
+  "((before_1, after_1), … (before_N,"
+  "after_N)) unique pad widths for each axis. ((before, 
after),) "
+  "yields same before and"
+  "after pad for each axis. "
+  "(pad,) or int is a shortcut for before = after = pad width 
for all"
+  "axes.");
+DMLC_DECLARE_FIELD(mode)
+.set_default(1)
+.describe("str or function, optional");
+DMLC_DECLARE_FIELD(reflect_type)
+.set_default("even")
+.describe("Used in ‘reflect’, and ‘symmetric’. "
+  "The ‘even’ style is the default with an unaltered 
reflection around "
+  "the edge value. For the ‘odd’ style,"
+  "the extended part of the array is created by subtracting 
the "
+  "reflected values from two times the edge value.");
+DMLC_DECLARE_FIELD(constant_value)
+.set_default(0.0)
 
 Review comment:
   2-space indentation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146521
 
 

 ##
 File path: src/operator/numpy/np_pad_op-inl.h
 ##
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+   const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<int ndim>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+  const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  mshadow::Shape ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+auto tmp = j / shape[i];
+ret[i] = j - tmp*shape[i];
+j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+DMLC_DECLARE_FIELD(pad_width)
+.describe("Number of values padded to the edges of each axis. "
+  "((before_1, after_1), … (before_N,"
+  "after_N)) unique pad widths for each axis. ((before, 
after),) "
+  "yields same before and"
+  "after pad for each axis. "
+  "(pad,) or int is a shortcut for before = after = pad width 
for all"
+  "axes.");
+DMLC_DECLARE_FIELD(mode)
+.set_default(1)
+.describe("str or function, optional");
+DMLC_DECLARE_FIELD(reflect_type)
+.set_default("even")
 
 Review comment:
   2-space indentation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146039
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -5866,4 +5866,116 @@ def bincount(x, weights=None, minlength=0):
    return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
 
 
+@set_module('mxnet.symbol.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+"""
+Pad an array.
+Parameters
+--
+array : array_like of rank N
+The array to pad.
+pad_width : {sequence, array_like, int}
+Number of values padded to the edges of each axis.
+((before_1, after_1), ... (before_N, after_N)) unique pad widths
+for each axis.
+((before, after),) yields same before and after pad for each axis.
+(pad,) or int is a shortcut for before = after = pad width for all
+axes.
+mode : str or function, optional
+One of the following string values or a user supplied function.
+'constant' (default)
+Pads with a constant value.
+'edge'
+Pads with the edge values of array.
+'linear_ramp'
+not supported yet
+'maximum'
+Pads with the maximum value of all of the
+vector along each axis.
+'mean'
+not supported yet
+'median'
+   not supported yet
+'minimum'
+Pads with the minimum value of all of the
+vector along each axis.
+'reflect'
+Pads with the reflection of the vector mirrored on
+the first and last values of the vector along each
+axis.
+'symmetric'
+Pads with the reflection of the vector mirrored
+along the edge of the array.
+'wrap'
+not supported yet
+'empty'
+Pads with undefined values.
+.. versionadded:: 1.17
+
+Padding function, see Notes.
+stat_length : not supported yet
+constant_values : scalar, optional
+Used in 'constant'.  The values to set the padded values for each
+axis.
+Default is 0.
+end_values : not supported yet
+reflect_type : {'even', 'odd'}, optional
+only support even now
+Returns
+---
+pad : ndarray
+Padded array of rank equal to `array` with shape increased
+according to `pad_width`.
+Examples
+
+>>> a = [1, 2, 3, 4, 5]
+>>> np.pad(a, (2, 3), 'edge')
+array([1, 1, 1, ..., 5, 5, 5])
+>>> np.pad(a, (2, 2), 'maximum')
+array([5, 5, 1, 2, 3, 4, 5, 5, 5])
+>>> np.pad(a, (2, 2), 'mean')
+array([3, 3, 1, 2, 3, 4, 5, 3, 3])
+>>> a = [[1, 2], [3, 4]]
+>>> np.pad(a, ((3, 2), (2, 3)), 'minimum')
+array([[1, 1, 1, 2, 1, 1, 1],
+   [1, 1, 1, 2, 1, 1, 1],
+   [1, 1, 1, 2, 1, 1, 1],
+   [1, 1, 1, 2, 1, 1, 1],
+   [3, 3, 3, 4, 3, 3, 3],
+   [1, 1, 1, 2, 1, 1, 1],
+   [1, 1, 1, 2, 1, 1, 1]])
+>>> a = [1, 2, 3, 4, 5]
+>>> np.pad(a, (2, 3), 'reflect')
+array([3, 2, 1, 2, 3, 4, 5, 4, 3, 2])
+>>> np.pad(a, (2, 3), 'symmetric')
+array([2, 1, 1, 2, 3, 4, 5, 5, 4, 3])
+>>> a = np.arange(6)
+>>> a = a.reshape((2, 3))
+>>> np.pad(a, ((2, 2), (2, 2)), pad_with)
+array([[10, 10, 10, 10, 10, 10, 10],
+   [10, 10, 10, 10, 10, 10, 10],
+   [10, 10,  0,  1,  2, 10, 10],
+   [10, 10,  3,  4,  5, 10, 10],
+   [10, 10, 10, 10, 10, 10, 10],
+   [10, 10, 10, 10, 10, 10, 10]])
+"""
+if mode == "constant":
+return _npi.pad(array, pad_width, 1, reflect_type, constant_values)
+elif mode == "symmetric" and reflect_type == "even":
+return _npi.pad(array, pad_width, 2, "even", constant_values)
+elif mode == "edge":
+return _npi.pad(array, pad_width, 3, reflect_type, constant_values)
+elif mode == "reflect" and reflect_type == "even":
+return _npi.pad(array, pad_width, 4, "even", constant_values)
+elif mode == "empty":
+pass
+elif mode == "maximum":
+return _npi.pad(array, pad_width, 5, "even", constant_values)
+elif mode == "minimum":
+return _npi.pad(array, pad_width, 6, "even", constant_values)
+else:
+raise ValueError(
+"didn't support these modes and reflect_types."
+)
 
 Review comment:
   No need for line wrap here.
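
   For illustration, the quoted else-branch collapsed onto one line, as the
   note suggests:
   ```python
   else:
       raise ValueError("didn't support these modes and reflect_types.")
   ```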


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146226
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -5866,4 +5866,116 @@ def bincount(x, weights=None, minlength=0):
    return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
 
 
+@set_module('mxnet.symbol.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+"""
+Pad an array.
+Parameters
 
 Review comment:
   extra blank line above


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146145
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -5866,4 +5866,116 @@ def bincount(x, weights=None, minlength=0):
    return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
 
 
+@set_module('mxnet.symbol.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+"""
+Pad an array.
+Parameters
+--
+array : array_like of rank N
+The array to pad.
+pad_width : {sequence, array_like, int}
+Number of values padded to the edges of each axis.
+((before_1, after_1), ... (before_N, after_N)) unique pad widths
+for each axis.
+((before, after),) yields same before and after pad for each axis.
+(pad,) or int is a shortcut for before = after = pad width for all
+axes.
+mode : str or function, optional
+One of the following string values or a user supplied function.
+'constant' (default)
+Pads with a constant value.
+'edge'
+Pads with the edge values of array.
+'linear_ramp'
+not supported yet
+'maximum'
+Pads with the maximum value of all of the
+vector along each axis.
+'mean'
+not supported yet
+'median'
+   not supported yet
+'minimum'
+Pads with the minimum value of all of the
+vector along each axis.
+'reflect'
+Pads with the reflection of the vector mirrored on
+the first and last values of the vector along each
+axis.
+'symmetric'
+Pads with the reflection of the vector mirrored
+along the edge of the array.
+'wrap'
+not supported yet
+'empty'
+Pads with undefined values.
+.. versionadded:: 1.17
+
+Padding function, see Notes.
+stat_length : not supported yet
+constant_values : scalar, optional
+Used in 'constant'.  The values to set the padded values for each
+axis.
+Default is 0.
+end_values : not supported yet
+reflect_type : {'even', 'odd'}, optional
+only support even now
+Returns
+---
+pad : ndarray
+Padded array of rank equal to `array` with shape increased
+according to `pad_width`.
+Examples
 
 Review comment:
   No need for examples section for symbolic interface.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367146199
 
 

 ##
 File path: python/mxnet/symbol/numpy/_symbol.py
 ##
 @@ -5866,4 +5866,116 @@ def bincount(x, weights=None, minlength=0):
    return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
 
 
+@set_module('mxnet.symbol.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+"""
+Pad an array.
+Parameters
+--
+array : array_like of rank N
+The array to pad.
+pad_width : {sequence, array_like, int}
+Number of values padded to the edges of each axis.
+((before_1, after_1), ... (before_N, after_N)) unique pad widths
+for each axis.
+((before, after),) yields same before and after pad for each axis.
+(pad,) or int is a shortcut for before = after = pad width for all
+axes.
+mode : str or function, optional
+One of the following string values or a user supplied function.
+'constant' (default)
+Pads with a constant value.
+'edge'
+Pads with the edge values of array.
+'linear_ramp'
+not supported yet
+'maximum'
+Pads with the maximum value of all of the
+vector along each axis.
+'mean'
+not supported yet
+'median'
+   not supported yet
+'minimum'
+Pads with the minimum value of all of the
+vector along each axis.
+'reflect'
+Pads with the reflection of the vector mirrored on
+the first and last values of the vector along each
+axis.
+'symmetric'
+Pads with the reflection of the vector mirrored
+along the edge of the array.
+'wrap'
+not supported yet
+'empty'
+Pads with undefined values.
+.. versionadded:: 1.17
+
+Padding function, see Notes.
+stat_length : not supported yet
+constant_values : scalar, optional
+Used in 'constant'.  The values to set the padded values for each
+axis.
+Default is 0.
+end_values : not supported yet
+reflect_type : {'even', 'odd'}, optional
+only support even now
+Returns
 
 Review comment:
   extra blank line above


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367145836
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -5597,6 +5597,59 @@ def hybrid_forward(self,F,a):
 assert_almost_equal(mx_out.asnumpy(), np_out, rtol=rtol, atol=atol)
 
 
+@with_seed()
+@use_np
+def test_np_pad():
+class TestPad(HybridBlock):
+def __init__(self, pad_width = (), mode="constant", reflect_type="even", constant_values=0):
+super(TestPad,self).__init__()
+self._pad_width = pad_width
+self._mode = mode
+self._reflect_type =reflect_type
+self._constant_values = constant_values
+def hybrid_forward(self,F,A):
+return F.np.pad(A, self._pad_width, mode=self._mode, reflect_type=self._reflect_type, constant_values=self._constant_values)
+shapes = [(1,5), (2,2), (2,2), (3,3), (2,3), (3,4,5)]
+dtypes = [np.int8, np.uint8, np.int32, np.int64, np.float16, np.float32, np.float64]
+mode = ['constant', 'reflect', 'symmetric', 'edge', 'minimum']
+for hybridize, shape, dtype, in itertools.product([False,True], shapes, dtypes):
+rtol = 1e-2 if dtype == np.float16 else 1e-3
+atol = 1e-4 if dtype == np.float16 else 1e-5
+
+for m in mode:
+x = np.random.uniform(-1.0, 1.0, size = shape).astype(dtype)
+pw = ()
+if (type(shape) == int):
+pw += (2,3)
+else:
+for i in range(len(shape)):
+pw += ((2,3),)
+test_pad = TestPad(pw, m, "even", 0)
+if hybridize:
+test_pad.hybridize()
+x.attach_grad()
+
+if(m != 'constant'):
+np_out = _np.pad(x.asnumpy(), pw, mode=m)
+else:
+np_out = _np.pad(x.asnumpy(), pw, mode=m, constant_values=0)
+with mx.autograd.record():
+mx_out = test_pad(x)
+
+# Code to get the reference backward value
+assert mx_out.shape == np_out.shape
+assert_almost_equal(mx_out.asnumpy(), np_out, rtol = rtol, atol = atol)
 
 Review comment:
   we'll also have to check the gradient since we have the backward implemented.
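
   For illustration, a minimal sketch of such a check, reusing the TestPad
   block and the helpers (np, _np, mx, assert_almost_equal, rtol, atol) from
   the quoted test, for mode='constant', whose backward simply crops the head
   gradient back to the input shape:
   ```python
   x = np.random.uniform(-1.0, 1.0, size=(2, 3))
   x.attach_grad()
   test_pad = TestPad(((2, 3), (1, 2)), 'constant', 'even', 0)
   with mx.autograd.record():
       mx_out = test_pad(x)
   mx_out.backward()  # implicit all-ones head gradient
   # cropping an all-ones gradient of the padded output leaves all ones
   assert x.grad.shape == x.shape
   assert_almost_equal(x.grad.asnumpy(), _np.ones(x.shape), rtol=rtol, atol=atol)
   ```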


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367144773
 
 

 ##
 File path: src/operator/numpy/np_pad_op.cu
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op.cu
+ * \brief GPU Implementation of numpy pad operations
+ */
+
+#include "./np_pad_op-inl.h"
+#include "../nn/concat-inl.h"
+
+namespace mxnet {
+namespace op {
+
+NNVM_REGISTER_OP(_npi_pad)
+.set_attr("FCompute", NumpyPadOpForward)
+.set_attr("FResourceRequest",
+  [](const NodeAttrs& attrs) {
+return std::vector{ResourceRequest::kTempSpace};
+  });
 
 Review comment:
   ```c++
    NNVM_REGISTER_OP(_npi_pad)
    .set_attr<FCompute>("FCompute<gpu>", NumpyPadOpForward<gpu>);
   ```
   is enough, the `FResourceRequest` is already registered in `.cc` file.
   Same for the op below.
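
   For illustration, a sketch of the `.cc`-side registration being referred
   to (hypothetical reconstruction following the usual MXNet pattern; the
   backward op would mirror it):
   ```c++
   // np_pad_op.cc -- the resource request is declared once, next to the
   // CPU kernel, so the .cu file only attaches the GPU FCompute.
   NNVM_REGISTER_OP(_npi_pad)
   .set_attr<FCompute>("FCompute<cpu>", NumpyPadOpForward<cpu>)
   .set_attr<FResourceRequest>("FResourceRequest",
     [](const NodeAttrs& attrs) {
       return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};
     });
   ```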


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367144282
 
 

 ##
 File path: src/operator/numpy/np_pad_op-inl.h
 ##
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+   const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<int ndim>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+  const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  mshadow::Shape ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+auto tmp = j / shape[i];
+ret[i] = j - tmp*shape[i];
+j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+DMLC_DECLARE_FIELD(pad_width)
+.describe("Number of values padded to the edges of each axis. "
+  "((before_1, after_1), … (before_N,"
+  "after_N)) unique pad widths for each axis. ((before, 
after),) "
+  "yields same before and"
+  "after pad for each axis. "
+  "(pad,) or int is a shortcut for before = after = pad width 
for all"
+  "axes.");
+DMLC_DECLARE_FIELD(mode)
+.set_default(1)
+.describe("str or function, optional");
+DMLC_DECLARE_FIELD(reflect_type)
+.set_default("even")
+.describe("Used in ‘reflect’, and ‘symmetric’. "
+  "The ‘even’ style is the default with an unaltered 
reflection around "
+  "the edge value. For the ‘odd’ style,"
+  "the extended part of the array is created by subtracting 
the "
+  "reflected values from two times the edge value.");
+DMLC_DECLARE_FIELD(constant_value)
+.set_default(0.0)
+.describe("Used in ‘constant’. The values to set the padded values for 
each axis."
+  "((before_1, after_1), ... (before_N, after_N)) unique pad 
constants for"
+  "each axis."
+  "((before, after),) yields same before and after constants 
for each axis."
+  "(constant,) or constant is a shortcut for before = after = 
constant for all"
+  "axes."
+  "Default is 0.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+   const mxnet::Tuple<mxnet::Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+int i;
+int sshape_number = ishape.ndim();
+mxnet::TShape oshape(ishape.ndim(), -1);
+for (i = ishape.ndim() - 1; i >=0; i--) {
+  int base = ishape[i];
+  base = base + pad_width[i][0] + pad_width[i][1];
+  oshape[i] = base;
+}
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+inline bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
 
 Review comment:
   Move this function to `.cc`
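
   For illustration, a sketch of that split (hypothetical): the header keeps
   a plain declaration and the definition, with `inline` dropped, moves to
   the translation unit:
   ```c++
   // np_pad_op-inl.h
   bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
                        mxnet::ShapeVector* in_attrs,
                        mxnet::ShapeVector* out_attrs);

   // np_pad_op.cc -- definition body as quoted above; e.g. an input of
   // shape (2, 3) with pad_width ((2, 2), (1, 3)) resolves to (6, 7).
   ```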


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367144346
 
 

 ##
 File path: src/operator/numpy/np_pad_op-inl.h
 ##
 @@ -0,0 +1,735 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file np_pad_op-inl.h
+ * \brief Function definition of matrix related operators
+ */
+#ifndef MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_PAD_OP_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "../tensor/matrix_op-inl.h"
+#include "../nn/concat-inl.h"
+#include "../../common/utils.h"
+#include "../mxnet_op.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+template <int ndim>
+MSHADOW_XINLINE index_t rravel(const mshadow::Shape<ndim>& coord,
+   const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  index_t ret = 0;
+  #pragma unroll
+  for (int i = 0; i < ndim; ++i) {
+ret = ret * shape[i] + (shape[i] > coord[i]) * coord[i];
+  }
+  return ret;
+}
+
+template<int ndim>
+MSHADOW_XINLINE mshadow::Shape<ndim> uunravel(index_t idx,
+  const mshadow::Tensor<cpu, 1, index_t>& shape) {
+  mshadow::Shape ret;
+  #pragma unroll
+  for (index_t i = ndim-1, j = idx; i >=0; --i) {
+auto tmp = j / shape[i];
+ret[i] = j - tmp*shape[i];
+j = tmp;
+  }
+  return ret;
+}
+
+struct NumpyPadParam : public dmlc::Parameter<NumpyPadParam> {
+  mxnet::Tuple<mxnet::Tuple<int>> pad_width;
+  int mode;
+  std::string reflect_type;
+  double constant_value;
+  DMLC_DECLARE_PARAMETER(NumpyPadParam) {
+DMLC_DECLARE_FIELD(pad_width)
+.describe("Number of values padded to the edges of each axis. "
+  "((before_1, after_1), … (before_N,"
+  "after_N)) unique pad widths for each axis. ((before, 
after),) "
+  "yields same before and"
+  "after pad for each axis. "
+  "(pad,) or int is a shortcut for before = after = pad width 
for all"
+  "axes.");
+DMLC_DECLARE_FIELD(mode)
+.set_default(1)
+.describe("str or function, optional");
+DMLC_DECLARE_FIELD(reflect_type)
+.set_default("even")
+.describe("Used in ‘reflect’, and ‘symmetric’. "
+  "The ‘even’ style is the default with an unaltered 
reflection around "
+  "the edge value. For the ‘odd’ style,"
+  "the extended part of the array is created by subtracting 
the "
+  "reflected values from two times the edge value.");
+DMLC_DECLARE_FIELD(constant_value)
+.set_default(0.0)
+.describe("Used in ‘constant’. The values to set the padded values for 
each axis."
+  "((before_1, after_1), ... (before_N, after_N)) unique pad 
constants for"
+  "each axis."
+  "((before, after),) yields same before and after constants 
for each axis."
+  "(constant,) or constant is a shortcut for before = after = 
constant for all"
+  "axes."
+  "Default is 0.");
+  }
+};
+
+inline mxnet::TShape NumpyPadShapeImpl(const mxnet::TShape& ishape,
+   const mxnet::Tuple<mxnet::Tuple<int>> pad_width) {
+  if (ishape.ndim() == 1) {
+auto s = ishape[0] + pad_width[0][0] + pad_width[1][0];
+return mxnet::TShape({s});
+  } else if (ishape.ndim() >= 2) {
+int i;
+int sshape_number = ishape.ndim();
+mxnet::TShape oshape(ishape.ndim(), -1);
+for (i = ishape.ndim() - 1; i >=0; i--) {
+  int base = ishape[i];
+  base = base + pad_width[i][0] + pad_width[i][1];
+  oshape[i] = base;
+}
+  return oshape;
+  }
+  return mxnet::TShape({-1, -1});
+}
+
+inline bool NumpyPadOpShape(const nnvm::NodeAttrs& attrs,
+mxnet::ShapeVector* in_attrs,
+mxnet::ShapeVector* out_attrs) {
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& ishape = (*in_attrs)[0];
+  if (!mxnet::ndim_is_known(ishape)) {
+return false;
+  }
+  const 

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367144014
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -6476,3 +6475,126 @@ def bincount(x, weights=None, minlength=0):
 if weights is None:
 return _npi.bincount(x, minlength=minlength, has_weights=False)
    return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+"""
+Pad an array.
+Parameters
+--
+array : array_like of rank N
+The array to pad.
+pad_width : {sequence, array_like, int}
+Number of values padded to the edges of each axis.
+((before_1, after_1), ... (before_N, after_N)) unique pad widths
+for each axis.
+((before, after),) yields same before and after pad for each axis.
+(pad,) or int is a shortcut for before = after = pad width for all
+axes.
+mode : str or function, optional
+One of the following string values or a user supplied function.
+'constant' (default)
+Pads with a constant value.
+'edge'
+Pads with the edge values of array.
+'linear_ramp'
+not supported yet
+'maximum'
+Pads with the maximum value of all of the
+vector along each axis.
+'mean'
+not supported yet
+'median'
+   not supported yet
+'minimum'
+Pads with the minimum value of all of the
+vector along each axis.
+'reflect'
+Pads with the reflection of the vector mirrored on
+the first and last values of the vector along each
+axis.
+'symmetric'
+Pads with the reflection of the vector mirrored
+along the edge of the array.
+'wrap'
+not supported yet
+'empty'
+Pads with undefined values.
+.. versionadded:: 1.17
+
+Padding function, see Notes.
+stat_length : not supported yet
+constant_values : scalar, optional
+Used in 'constant'.  The values to set the padded values for each
+axis.
+Default is 0.
+end_values : not supported yet
+reflect_type : {'even', 'odd'}, optional
+only support even now
+Returns
+---
+pad : ndarray
+Padded array of rank equal to `array` with shape increased
+according to `pad_width`.
+Examples
 
 Review comment:
   extra blank line above


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367143987
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -6476,3 +6475,126 @@ def bincount(x, weights=None, minlength=0):
 if weights is None:
 return _npi.bincount(x, minlength=minlength, has_weights=False)
    return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+"""
+Pad an array.
+Parameters
+--
+array : array_like of rank N
+The array to pad.
+pad_width : {sequence, array_like, int}
+Number of values padded to the edges of each axis.
+((before_1, after_1), ... (before_N, after_N)) unique pad widths
+for each axis.
+((before, after),) yields same before and after pad for each axis.
+(pad,) or int is a shortcut for before = after = pad width for all
+axes.
+mode : str or function, optional
+One of the following string values or a user supplied function.
+'constant' (default)
+Pads with a constant value.
+'edge'
+Pads with the edge values of array.
+'linear_ramp'
+not supported yet
+'maximum'
+Pads with the maximum value of all of the
+vector along each axis.
+'mean'
+not supported yet
+'median'
+   not supported yet
+'minimum'
+Pads with the minimum value of all of the
+vector along each axis.
+'reflect'
+Pads with the reflection of the vector mirrored on
+the first and last values of the vector along each
+axis.
+'symmetric'
+Pads with the reflection of the vector mirrored
+along the edge of the array.
+'wrap'
+not supported yet
+'empty'
+Pads with undefined values.
+.. versionadded:: 1.17
+
+Padding function, see Notes.
+stat_length : not supported yet
+constant_values : scalar, optional
+Used in 'constant'.  The values to set the padded values for each
+axis.
+Default is 0.
+end_values : not supported yet
+reflect_type : {'even', 'odd'}, optional
+only support even now
+Returns
 
 Review comment:
   extra blank line above


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367143959
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -6476,3 +6475,126 @@ def bincount(x, weights=None, minlength=0):
 if weights is None:
 return _npi.bincount(x, minlength=minlength, has_weights=False)
    return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+"""
+Pad an array.
+Parameters
 
 Review comment:
   extra blank line above


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-01-15 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r367143100
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -6476,3 +6475,126 @@ def bincount(x, weights=None, minlength=0):
 if weights is None:
 return _npi.bincount(x, minlength=minlength, has_weights=False)
    return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(array, pad_width, mode="constant", reflect_type="even", constant_values=0):
+"""
+Pad an array.
+Parameters
+--
+array : array_like of rank N
+The array to pad.
+pad_width : {sequence, array_like, int}
+Number of values padded to the edges of each axis.
+((before_1, after_1), ... (before_N, after_N)) unique pad widths
+for each axis.
+((before, after),) yields same before and after pad for each axis.
+(pad,) or int is a shortcut for before = after = pad width for all
+axes.
+mode : str or function, optional
+One of the following string values or a user supplied function.
+'constant' (default)
+Pads with a constant value.
+'edge'
+Pads with the edge values of array.
+'linear_ramp'
+not supported yet
+'maximum'
+Pads with the maximum value of all of the
+vector along each axis.
+'mean'
+not supported yet
+'median'
+   not supported yet
+'minimum'
+Pads with the minimum value of all of the
+vector along each axis.
+'reflect'
+Pads with the reflection of the vector mirrored on
+the first and last values of the vector along each
+axis.
+'symmetric'
+Pads with the reflection of the vector mirrored
+along the edge of the array.
+'wrap'
+not supported yet
+'empty'
+Pads with undefined values.
+.. versionadded:: 1.17
+
+Padding function, see Notes.
+stat_length : not supported yet
+constant_values : scalar, optional
+Used in 'constant'.  The values to set the padded values for each
+axis.
+Default is 0.
+end_values : not supported yet
+reflect_type : {'even', 'odd'}, optional
+only support even now
+Returns
+---
+pad : ndarray
+Padded array of rank equal to `array` with shape increased
+according to `pad_width`.
+Examples
+
+>>> a = [1, 2, 3, 4, 5]
+>>> np.pad(a, (2, 3), 'edge')
+array([1, 1, 1, ..., 5, 5, 5])
+>>> np.pad(a, (2, 2), 'maximum')
+array([5, 5, 1, 2, 3, 4, 5, 5, 5])
+>>> np.pad(a, (2, 2), 'mean')
+array([3, 3, 1, 2, 3, 4, 5, 3, 3])
+>>> a = [[1, 2], [3, 4]]
+>>> np.pad(a, ((3, 2), (2, 3)), 'minimum')
+array([[1, 1, 1, 2, 1, 1, 1],
+   [1, 1, 1, 2, 1, 1, 1],
+   [1, 1, 1, 2, 1, 1, 1],
+   [1, 1, 1, 2, 1, 1, 1],
+   [3, 3, 3, 4, 3, 3, 3],
+   [1, 1, 1, 2, 1, 1, 1],
+   [1, 1, 1, 2, 1, 1, 1]])
+>>> a = [1, 2, 3, 4, 5]
+>>> np.pad(a, (2, 3), 'reflect')
+array([3, 2, 1, 2, 3, 4, 5, 4, 3, 2])
+>>> np.pad(a, (2, 3), 'symmetric')
+array([2, 1, 1, 2, 3, 4, 5, 5, 4, 3])
+>>> a = np.arange(6)
+>>> a = a.reshape((2, 3))
+>>> np.pad(a, ((2, 2), (2, 2)), pad_with)
+array([[10, 10, 10, 10, 10, 10, 10],
+   [10, 10, 10, 10, 10, 10, 10],
+   [10, 10,  0,  1,  2, 10, 10],
+   [10, 10,  3,  4,  5, 10, 10],
+   [10, 10, 10, 10, 10, 10, 10],
+   [10, 10, 10, 10, 10, 10, 10]])
+"""
+
+if array.size == 0:
+for axis, width_pair in zip(axes, pad_width):
+if array.shape[axis] == 0 and any(width_pair):
+raise ValueError(
+"can't extend empty axis {} using modes other than "
+"'constant' or 'empty'".format(axis)
+)
+else:
+if mode == "constant":
+return _npi.pad(array, pad_width, 1, reflect_type, constant_values)
+elif mode == "symmetric" and reflect_type == "even":
+return _npi.pad(array, pad_width, 2, "even", constant_values)
+elif mode == "edge":
+return _npi.pad(array, pad_width, 3, reflect_type, constant_values)
+elif mode == "reflect" and reflect_type == "even":
+return _npi.pad(array, pad_width, 4, "even", constant_values)
+elif mode == "empty":
+pass
+elif mode == "maximum":
+return _npi.pad(array, pad_width, 5, "even", constant_values)
+elif mode == "minimum":
+return _npi.pad(array, pad_width, 6, "even", constant_values)
+else:
+raise ValueError(
+"didn't support these 
