[GitHub] [incubator-mxnet] pengzhao-intel edited a comment on issue #14619: [Discussion] 1.5.0 Roadmap

2019-05-06 Thread GitBox
pengzhao-intel edited a comment on issue #14619: [Discussion] 1.5.0 Roadmap
URL: 
https://github.com/apache/incubator-mxnet/issues/14619#issuecomment-480110642
 
 
   MKLDNN Quantization PR
   
   
   Name | PR# | status
   -- | -- | --
   sum | #14614 | DONE
   relu | #14604 | DONE
   BN | | TODO
   refactor requantize | #14608 | DONE
   improve quantize | #14641 | DONE
   conv + activation | #14819 | under review
   conv1d enhance | | @ciyongch 
   FC1d enhance | | @wuxun-zhang
   cache op | #14785 | reverted, WIP
   quantization flow to support 0 shape (RNN, concat) | | @ciyongch  
   New models (SSD COCO/RN18/MobileNet v2) | #14646, #14823 | DONE
   
   
   FP32 optimization
   
   Name | PR# | status
   -- | -- | --
   data loader for CPU | #14824 | DONE
   transpose | #14545 | DONE
   RNN refactor with NNVM | #14476 | DONE
   reshape enhance | #14903 | under review
   sum1d | | @ciyongch 
   softmax 1d | #14818 | under review
   MKL Math (ERF, mean, etc) | #14893 | under review
   Build (Windows/Linux) | #14740, #14743, https://github.com/dmlc/mshadow/pull/374, #14829 | @yinghu5 @NeoZhangJianyu
   Update MKLDNN to 0.19 | #14783 | WIP
   
   
   





[GitHub] [incubator-mxnet] shuokay commented on issue #14735: How to update parameter manually with gluon in training loop

2019-05-06 Thread GitBox
shuokay commented on issue #14735: How to update parameter manually with gluon 
in training loop
URL: 
https://github.com/apache/incubator-mxnet/issues/14735#issuecomment-489927339
 
 
   @eric-haibin-lin I think distributed training needs `update_on_kvstore=True`?




[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #14735: How to update parameter manually with gluon in training loop

2019-05-06 Thread GitBox
eric-haibin-lin commented on issue #14735: How to update parameter manually 
with gluon in training loop
URL: 
https://github.com/apache/incubator-mxnet/issues/14735#issuecomment-489921326
 
 
   Can you set `update_on_kvstore=False` when creating the Trainer? I don't think you need to change the kvstore type to `local`.
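   For anyone landing here later, a minimal sketch of the suggested setup (not from this thread; the tiny network, data, and hyperparameters are placeholders). As I understand it, with `update_on_kvstore=False` the kvstore only aggregates gradients and the optimizer update runs on the worker, so you can either call `trainer.step()` or split it into `allreduce_grads()` plus your own manual update followed by `trainer.update()`:

   ```python
   import mxnet as mx
   from mxnet import autograd, gluon

   net = gluon.nn.Dense(1)
   net.initialize()
   # update_on_kvstore=False keeps the optimizer update on the worker,
   # which leaves room for manipulating parameters/gradients manually.
   trainer = gluon.Trainer(net.collect_params(), 'sgd',
                           {'learning_rate': 0.1}, update_on_kvstore=False)

   x = mx.nd.random.uniform(shape=(4, 2))
   y = mx.nd.random.uniform(shape=(4, 1))
   with autograd.record():
       loss = ((net(x) - y) ** 2).mean()
   loss.backward()

   trainer.step(batch_size=4)  # or: trainer.allreduce_grads(); <manual tweak>; trainer.update(4)
   ```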




[GitHub] [incubator-mxnet] TaoLv opened a new pull request #14903: Fix reshape to add in-place back

2019-05-06 Thread GitBox
TaoLv opened a new pull request #14903: Fix reshape to add in-place back
URL: https://github.com/apache/incubator-mxnet/pull/14903
 
 
   ## Description ##
   Previously, the in-place option was removed when building with MKL-DNN, which made the reshape operator take much more time in the mxnet-mkl package. This PR adds the in-place option back and uses a temporary buffer for reordering when the input is in an MKL-DNN layout.
   
   Besides, this PR also adds a unit test from 
https://github.com/apache/incubator-mxnet/issues/14766
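   For readers outside the PR, a minimal sketch (not taken from the PR or from #14766; the shapes are illustrative) of the kind of call that is affected. The convolution output below may carry an MKL-DNN layout, in which case reshape goes through a temporary reorder, while a default-layout array can be reshaped in place:

   ```python
   import mxnet as mx

   x = mx.nd.random.uniform(shape=(32, 64, 56, 56))
   w = mx.nd.random.uniform(shape=(64, 64, 3, 3))
   # Output of an MKL-DNN-accelerated op; its internal layout may differ from the default.
   y = mx.nd.Convolution(x, w, kernel=(3, 3), pad=(1, 1), num_filter=64, no_bias=True)
   z = y.reshape((32, -1))  # in-place view for default layouts; reordered via a temp buffer otherwise
   z.wait_to_read()
   ```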
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] zixuanweeei commented on a change in pull request #14877: Fix the incorrect MKLDNN/MKL logic in cmake

2019-05-06 Thread GitBox
zixuanweeei commented on a change in pull request #14877: Fix the incorrect 
MKLDNN/MKL logic in cmake 
URL: https://github.com/apache/incubator-mxnet/pull/14877#discussion_r281467455
 
 

 ##
 File path: ci/build_windows.py
 ##
 @@ -218,6 +246,8 @@ def main():
 os.environ["OpenCV_DIR"] = "C:\\Program 
Files\\OpenCV-v3.4.1\\build"
 if 'CUDA_PATH' not in os.environ:
 os.environ["CUDA_PATH"] = "C:\\Program Files\\NVIDIA GPU Computing 
Toolkit\\CUDA\\v9.2"
+if 'MKL_ROOT' not in os.environ:
+os.environ["MKL_ROOT"] = "C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries\windows\mkl"
 
 Review comment:
   Not sure about the effects of the escape characters here, but for the path string I think using a raw string prefix (r'the/path/string') would be better in this case.
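   To make the comparison concrete (the values are only examples): sequences like `\I` or `\c` are not recognized escapes, so CPython keeps the backslash literally (and newer versions warn about the invalid escape), whereas doubled backslashes or a raw string are unambiguous:

   ```python
   import os

   # Doubled backslashes vs. a raw string literal; both spell the same path.
   mkl_root_escaped = "C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries\\windows\\mkl"
   mkl_root_raw = r"C:\Program Files (x86)\IntelSWTools\compilers_and_libraries\windows\mkl"
   assert mkl_root_escaped == mkl_root_raw

   # Same guard as in the diff above, with the raw-string spelling:
   if 'MKL_ROOT' not in os.environ:
       os.environ["MKL_ROOT"] = mkl_root_raw
   ```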




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #14858: add_n operator with MXNet-MKL producing wrong results when input count >4

2019-05-06 Thread GitBox
pengzhao-intel commented on issue #14858: add_n operator with MXNet-MKL 
producing wrong results when input count >4
URL: 
https://github.com/apache/incubator-mxnet/issues/14858#issuecomment-489914748
 
 
   Good catch and fixed now :) 




[GitHub] [incubator-mxnet] pengzhao-intel closed issue #14858: add_n operator with MXNet-MKL producing wrong results when input count >4

2019-05-06 Thread GitBox
pengzhao-intel closed issue #14858: add_n operator with MXNet-MKL producing 
wrong results when input count >4
URL: https://github.com/apache/incubator-mxnet/issues/14858
 
 
   




[incubator-mxnet] branch master updated (fdd45cf -> 5bda980)

2019-05-06 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository.

patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from fdd45cf  Add mkldnn_version.h to pip package (#14899)
 add 5bda980  fix add_n bug: when input mem overlap with output mem, 
results is wrong (#14889)

No new revisions were added by this update.

Summary of changes:
 src/ndarray/ndarray_function.cc|  6 +-
 tests/python/unittest/test_operator.py | 12 
 2 files changed, 17 insertions(+), 1 deletion(-)



[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #14889: fix add_n bug: when input mem overlap with output mem, results is wrong

2019-05-06 Thread GitBox
pengzhao-intel merged pull request #14889: fix add_n bug: when input mem 
overlap with output mem, results is wrong
URL: https://github.com/apache/incubator-mxnet/pull/14889
 
 
   




[GitHub] [incubator-mxnet] zixuanweeei commented on a change in pull request #14877: Fix the incorrect MKLDNN/MKL logic in cmake

2019-05-06 Thread GitBox
zixuanweeei commented on a change in pull request #14877: Fix the incorrect 
MKLDNN/MKL logic in cmake 
URL: https://github.com/apache/incubator-mxnet/pull/14877#discussion_r281462728
 
 

 ##
 File path: ci/jenkins/Jenkins_steps.groovy
 ##
 @@ -515,6 +515,48 @@ def compile_windows_cpu() {
     }]
 }
 
+def compile_windows_cpu_mkldnn() {
+    return ['Build CPU MKLDNN windows':{
+      node(NODE_WINDOWS_CPU) {
+        ws('workspace/build-cpu-mkldnn') {
+          timeout(time: max_time, unit: 'MINUTES') {
+            utils.init_git_win()
+            powershell 'py -3 ci/build_windows.py -f WIN_CPU_MKLDNN'
+            stash includes: 'windows_package.7z', name: 'windows_package_cpu_mkldnn'
+          }
+        }
+      }
+    }]
+}
+
+def compile_windows_cpu_mkldnn_mkl() {
+    return ['Build CPU MKLDNN MKL windows':{
+      node(NODE_WINDOWS_CPU) {
+        ws('workspace/build-cpu-mkldnn-mkl') {
+          timeout(time: max_time, unit: 'MINUTES') {
+            utils.init_git_win()
+            powershell 'py -3 ci/build_windows.py -f WIN_CPU_MKLDNN_MKL'
+            stash includes: 'windows_package.7z', name: 'windows_package_cpu_mkldnn_mkl'
+          }
+        }
+      }
+    }]
+}
+
+def compile_windows_cpu_mkl() {
+    return ['Build CPU NOMKLDNN MKL windows':{
+      node(NODE_WINDOWS_CPU) {
+        ws('workspace/build-cpu-nomkldnn-mkl') {
+          timeout(time: max_time, unit: 'MINUTES') {
+            utils.init_git_win()
+            powershell 'py -3 ci/build_windows.py -f WIN_CPU_NOMKLDNN_MKL'
 
 Review comment:
   The build flavor (`-f`) should be `WIN_CPU_MKL` instead of `WIN_CPU_NOMKLDNN_MKL`, since it was changed from the latter in `ci/build_windows.py` (line 93).




[GitHub] [incubator-mxnet] junrushao1994 commented on issue #14393: [MXNET-1352] Allow dynamic shape in while_loop and if conditionals

2019-05-06 Thread GitBox
junrushao1994 commented on issue #14393: [MXNET-1352] Allow dynamic shape in 
while_loop and if conditionals
URL: https://github.com/apache/incubator-mxnet/pull/14393#issuecomment-489903011
 
 
   Finally it should work...




[GitHub] [incubator-mxnet] arcadiaphy commented on issue #14718: [Flaky] Flaky test (test_random_size_crop)

2019-05-06 Thread GitBox
arcadiaphy commented on issue #14718: [Flaky] Flaky test 
(test_random_size_crop) 
URL: 
https://github.com/apache/incubator-mxnet/issues/14718#issuecomment-489901195
 
 
   another one:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fwindows-cpu/detail/PR-14894/1/pipeline




[GitHub] [incubator-mxnet] szha edited a comment on issue #14017: Loading parameters from pretrained gluon model

2019-05-06 Thread GitBox
szha edited a comment on issue #14017: Loading parameters from pretrained gluon 
model
URL: 
https://github.com/apache/incubator-mxnet/issues/14017#issuecomment-459629217
 
 
   @MaJieCornell you can still update the parameter files by calling 
`param_dict = mx.nd.load('filename.params')`, which returns a dictionary of 
NDArrays. You can delete some elements from this dictionary, and then 
`mx.nd.save('new_filename.params', param_dict)`, and load that in Gluon with 
`allow_missing=True`.
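   A self-contained sketch of that workflow, using a throwaway `Dense` block so the file and key names actually exist (they are not from this issue):

   ```python
   import mxnet as mx
   from mxnet import gluon

   # Create a small parameter file purely for the demonstration.
   net = gluon.nn.Dense(3, in_units=2)
   net.initialize()
   net.save_parameters('filename.params')

   param_dict = mx.nd.load('filename.params')     # dict of parameter name -> NDArray
   param_dict.pop('bias', None)                   # drop the entries you no longer want
   mx.nd.save('new_filename.params', param_dict)  # note the order: filename first, then the dict
   net.load_parameters('new_filename.params', allow_missing=True)
   ```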




[GitHub] [incubator-mxnet] szha commented on issue #14017: Loading parameters from pretrained gluon model

2019-05-06 Thread GitBox
szha commented on issue #14017: Loading parameters from pretrained gluon model
URL: 
https://github.com/apache/incubator-mxnet/issues/14017#issuecomment-489899654
 
 
   @Marcovaldong thanks, fixed.




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #14902: flaky test: test_bilinear_resize_op

2019-05-06 Thread GitBox
mxnet-label-bot commented on issue #14902: flaky test: test_bilinear_resize_op
URL: 
https://github.com/apache/incubator-mxnet/issues/14902#issuecomment-489899149
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Test, Flaky




[GitHub] [incubator-mxnet] arcadiaphy opened a new issue #14902: flaky test: test_bilinear_resize_op

2019-05-06 Thread GitBox
arcadiaphy opened a new issue #14902: flaky test: test_bilinear_resize_op
URL: https://github.com/apache/incubator-mxnet/issues/14902
 
 
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/PR-14894/1/pipeline
   
   @lobanov-m, perhaps there are some errors introduced in #13226?




[GitHub] [incubator-mxnet] bfgray3 opened a new pull request #14901: don't check for nullptr before deleting; closes #14580

2019-05-06 Thread GitBox
bfgray3 opened a new pull request #14901: don't check for nullptr before 
deleting; closes #14580
URL: https://github.com/apache/incubator-mxnet/pull/14901
 
 
   




[GitHub] [incubator-mxnet] arcadiaphy closed issue #14835: Flaky test test_custom_op_exc

2019-05-06 Thread GitBox
arcadiaphy closed issue #14835: Flaky test test_custom_op_exc
URL: https://github.com/apache/incubator-mxnet/issues/14835
 
 
   




[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #14859: [numpy] Numpy-compatible Mean

2019-05-06 Thread GitBox
reminisce commented on a change in pull request #14859: [numpy] 
Numpy-compatible Mean
URL: https://github.com/apache/incubator-mxnet/pull/14859#discussion_r281451940
 
 

 ##
 File path: src/operator/numpy/np_broadcast_reduce_op_value.cc
 ##
 @@ -75,5 +75,56 @@ NNVM_REGISTER_OP(_backward_numpy_sum)
 .set_num_inputs(1)
 .set_attr("FCompute", NumpyReduceAxesBackwardUseNone);
 
+inline bool IsIntType(const int dtype) {
+  return (dtype >= 3);
 
 Review comment:
   Better to use enum variable names rather than a hard-coded number, for readability and maintenance.




[GitHub] [incubator-mxnet] reminisce commented on issue #14728: [MXNET-1386] fix for shape mismatch

2019-05-06 Thread GitBox
reminisce commented on issue #14728: [MXNET-1386] fix for shape mismatch
URL: https://github.com/apache/incubator-mxnet/pull/14728#issuecomment-489896911
 
 
   @samskalicky What error message did you still see with this fix?




[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #14831: [numpy] Numpy dot

2019-05-06 Thread GitBox
reminisce commented on a change in pull request #14831: [numpy] Numpy dot
URL: https://github.com/apache/incubator-mxnet/pull/14831#discussion_r281449349
 
 

 ##
 File path: src/operator/numpy/np_dot-inl.h
 ##
 @@ -0,0 +1,284 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_dot-inl.h
+ * \brief Function definition of matrix numpy-compatible dot operator
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_DOT_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_DOT_INL_H_
+
+#include 
+#include 
+#include "../tensor/dot-inl.h"
+#include "../tensor/elemwise_binary_op.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+inline bool NumpyDotShape(const nnvm::NodeAttrs& attrs,
+  mxnet::ShapeVector *in_attrs,
+  mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& a_shape = in_attrs->at(0);
+  const mxnet::TShape& b_shape = in_attrs->at(1);
+
+  if (a_shape.ndim() == 1 && b_shape.ndim() == 1) {
+// Case 1: both 1-D arrays, inner product of vectors
+CHECK_EQ(a_shape[0], b_shape[0]);
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, mxnet::TShape(0, 0));
+  } else if (a_shape.ndim() == 2 && b_shape.ndim() == 2) {
+// Case 2: both 2-D arrays, matrix multiplication
+CHECK_EQ(a_shape[1], b_shape[0]);
+mxnet::TShape mm_shape(2, 0);
+mm_shape[0] = a_shape[0];
+mm_shape[1] = b_shape[1];
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, mm_shape);
+  } else if (a_shape.ndim() == 0 && b_shape.ndim() == 0) {
+// Case 3: both 0-D scalars, equivalent to multiply
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, mxnet::TShape(0, 0));
+  } else if (a_shape.ndim() == 0 || b_shape.ndim() == 0) {
+// Case 3.5: either of them is a scalar, just scale by one of them
+mxnet::TShape oshape = (a_shape.ndim() == 0) ? b_shape : a_shape;
 
 Review comment:
   I think this case actually covers `Case 3`?
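   For reference (an added illustration, not part of the review), NumPy's own behaviour shows why the scalar branch subsumes the 0-d/0-d case:

   ```python
   import numpy as np

   print(np.dot(np.array(2.0), np.array(3.0)))          # 6.0 -> 0-d result, same as multiply
   print(np.dot(np.array(2.0), np.ones((2, 3))).shape)  # (2, 3) -> a 0-d operand just scales the other
   ```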




[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #14831: [numpy] Numpy dot

2019-05-06 Thread GitBox
reminisce commented on a change in pull request #14831: [numpy] Numpy dot
URL: https://github.com/apache/incubator-mxnet/pull/14831#discussion_r281450213
 
 

 ##
 File path: src/operator/numpy/np_dot-inl.h
 ##
 @@ -0,0 +1,284 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_dot-inl.h
+ * \brief Function definition of matrix numpy-compatible dot operator
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_DOT_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_DOT_INL_H_
+
+#include 
+#include 
+#include "../tensor/dot-inl.h"
+#include "../tensor/elemwise_binary_op.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+inline bool NumpyDotShape(const nnvm::NodeAttrs& attrs,
+  mxnet::ShapeVector *in_attrs,
+  mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& a_shape = in_attrs->at(0);
+  const mxnet::TShape& b_shape = in_attrs->at(1);
+
+  if (a_shape.ndim() == 1 && b_shape.ndim() == 1) {
+// Case 1: both 1-D arrays, inner product of vectors
+CHECK_EQ(a_shape[0], b_shape[0]);
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, mxnet::TShape(0, 0));
+  } else if (a_shape.ndim() == 2 && b_shape.ndim() == 2) {
+// Case 2: both 2-D arrays, matrix multiplication
+CHECK_EQ(a_shape[1], b_shape[0]);
+mxnet::TShape mm_shape(2, 0);
+mm_shape[0] = a_shape[0];
+mm_shape[1] = b_shape[1];
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, mm_shape);
+  } else if (a_shape.ndim() == 0 && b_shape.ndim() == 0) {
+// Case 3: both 0-D scalars, equivalent to multiply
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, mxnet::TShape(0, 0));
+  } else if (a_shape.ndim() == 0 || b_shape.ndim() == 0) {
+// Case 3.5: either of them is a scalar, just scale by one of them
+mxnet::TShape oshape = (a_shape.ndim() == 0) ? b_shape : a_shape;
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
+  } else if (b_shape.ndim() == 1) {
+// Case 4: a is N-D array and b is 1-D array, sum product over the last 
axis
+CHECK_EQ(a_shape[a_shape.ndim() - 1], b_shape[0]);
+mxnet::TShape out_shape(a_shape.ndim() - 1, 0);
+for (int i = 0; i < a_shape.ndim() - 1; ++i) {
+  out_shape[i] = a_shape[i];
+}
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, out_shape);
+  } else {
+// Case 5: a is N-D array and b is M-D array, sum product over the last 
axis
+// of a and the 2nd-to-last axis of b
+LOG(FATAL) << "Case 5 not implemented yet...";
+  }
+  return true;
+}
+
+template
+inline void MMImpl(const OpContext& ctx,
+   const TBlob& a,
+   const TBlob& b,
+   const TBlob& out,
+   const OpReqType req,
+   const bool trans_a = false,
+   const bool trans_b = false) {
+  using namespace mshadow;
+  using namespace mshadow_op;
+
+  Stream *s = ctx.get_stream();
+  int ma, na, mb, nb;
 
 Review comment:
   Should they be `index_t`?




[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #14831: [numpy] Numpy dot

2019-05-06 Thread GitBox
reminisce commented on a change in pull request #14831: [numpy] Numpy dot
URL: https://github.com/apache/incubator-mxnet/pull/14831#discussion_r281326507
 
 

 ##
 File path: src/operator/numpy/np_dot-inl.h
 ##
 @@ -0,0 +1,284 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_dot-inl.h
+ * \brief Function definition of matrix numpy-compatible dot operator
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_DOT_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_DOT_INL_H_
+
+#include 
+#include 
+#include "../tensor/dot-inl.h"
+#include "../tensor/elemwise_binary_op.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+inline bool NumpyDotShape(const nnvm::NodeAttrs& attrs,
 
 Review comment:
   Can this be put in the .cc file? We'd better minimize the scope of functions in the codebase, since we have already exposed too many functions in headers, which leads to enormous build times whenever something minor is changed.




[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #14831: [numpy] Numpy dot

2019-05-06 Thread GitBox
reminisce commented on a change in pull request #14831: [numpy] Numpy dot
URL: https://github.com/apache/incubator-mxnet/pull/14831#discussion_r281451236
 
 

 ##
 File path: src/operator/numpy/np_dot-inl.h
 ##
 @@ -0,0 +1,284 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_dot-inl.h
+ * \brief Function definition of matrix numpy-compatible dot operator
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_DOT_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_DOT_INL_H_
+
+#include 
+#include 
+#include "../tensor/dot-inl.h"
+#include "../tensor/elemwise_binary_op.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+inline bool NumpyDotShape(const nnvm::NodeAttrs& attrs,
+  mxnet::ShapeVector *in_attrs,
+  mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& a_shape = in_attrs->at(0);
+  const mxnet::TShape& b_shape = in_attrs->at(1);
+
+  if (a_shape.ndim() == 1 && b_shape.ndim() == 1) {
+// Case 1: both 1-D arrays, inner product of vectors
+CHECK_EQ(a_shape[0], b_shape[0]);
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, mxnet::TShape(0, 0));
+  } else if (a_shape.ndim() == 2 && b_shape.ndim() == 2) {
+// Case 2: both 2-D arrays, matrix multiplication
+CHECK_EQ(a_shape[1], b_shape[0]);
+mxnet::TShape mm_shape(2, 0);
+mm_shape[0] = a_shape[0];
+mm_shape[1] = b_shape[1];
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, mm_shape);
+  } else if (a_shape.ndim() == 0 && b_shape.ndim() == 0) {
+// Case 3: both 0-D scalars, equivalent to multiply
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, mxnet::TShape(0, 0));
+  } else if (a_shape.ndim() == 0 || b_shape.ndim() == 0) {
+// Case 3.5: either of them is a scalar, just scale by one of them
+mxnet::TShape oshape = (a_shape.ndim() == 0) ? b_shape : a_shape;
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, oshape);
+  } else if (b_shape.ndim() == 1) {
+// Case 4: a is N-D array and b is 1-D array, sum product over the last 
axis
+CHECK_EQ(a_shape[a_shape.ndim() - 1], b_shape[0]);
+mxnet::TShape out_shape(a_shape.ndim() - 1, 0);
+for (int i = 0; i < a_shape.ndim() - 1; ++i) {
+  out_shape[i] = a_shape[i];
+}
+SHAPE_ASSIGN_CHECK(*out_attrs, 0, out_shape);
+  } else {
+// Case 5: a is N-D array and b is M-D array, sum product over the last 
axis
+// of a and the 2nd-to-last axis of b
+LOG(FATAL) << "Case 5 not implemented yet...";
+  }
+  return true;
+}
+
+template
+inline void MMImpl(const OpContext& ctx,
+   const TBlob& a,
+   const TBlob& b,
+   const TBlob& out,
+   const OpReqType req,
+   const bool trans_a = false,
+   const bool trans_b = false) {
+  using namespace mshadow;
+  using namespace mshadow_op;
+
+  Stream *s = ctx.get_stream();
+  int ma, na, mb, nb;
+  na = a.size(a.ndim() - 1);
+  ma = a.Size() / na;
+  mb = b.size(0);
+  nb = b.Size() / mb;
+  MSHADOW_REAL_TYPE_SWITCH(out.type_flag_, DType, {
+Tensor input0 = a.get_with_shape(Shape2(ma, 
na), s);
+Tensor input1 = b.get_with_shape(Shape2(mb, 
nb), s);
+Tensor output0;
+if (trans_a && trans_b) {
+  output0 = out.get_with_shape(Shape2(na, mb), s);
+  ASSIGN_DISPATCH(output0, req, dot(input0.T(), input1.T()));
+} else if (!trans_a && trans_b) {
+  output0 = out.get_with_shape(Shape2(ma, mb), s);
+  ASSIGN_DISPATCH(output0, req, dot(input0, input1.T()));
+} else if (trans_a && !trans_b) {
+  output0 = out.get_with_shape(Shape2(na, nb), s);
+  ASSIGN_DISPATCH(output0, req, dot(input0.T(), input1));
+} else {
+  output0 = out.get_with_shape(Shape2(ma, nb), s);
+  ASSIGN_DISPATCH(output0, req, dot(input0, input1));
+}
+  });
+}
+
+template
+struct scalar_mul_kernel {
+  template
+  MSHADOW_XINLINE static void Map(int i, DType *out, const DType* tensor, 
const DType *scalar) {
+KERNEL_ASSIGN(out[i], req, tensor[i] * scalar[0]);
+  }
+};
+
+template
+inline void NumpyDotForward(const nnvm::NodeAttrs& attrs,
+const OpContext& ctx,
+

[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #14831: [numpy] Numpy dot

2019-05-06 Thread GitBox
reminisce commented on a change in pull request #14831: [numpy] Numpy dot
URL: https://github.com/apache/incubator-mxnet/pull/14831#discussion_r281451391
 
 

 ##
 File path: src/operator/numpy/np_dot.cc
 ##
 @@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_dot.cc
+ * \brief CPU Implementation of numpy-compatible dot
+ */
+
+#include "./np_dot-inl.h"
+
+namespace mxnet {
+namespace op {
+
+NNVM_REGISTER_OP(_numpy_dot)
+.describe(R"doc(Dot product of two arrays. Specifically,
+
+- If both a and b are 1-D arrays, it is inner product of vectors (without 
complex conjugation).
+
+- If both a and b are 2-D arrays, it is matrix multiplication, but using 
matmul or a @ b is preferred.
 
 Review comment:
   Please modify the doc to keep it aligned with MXNet's functionality.




[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #14831: [numpy] Numpy dot

2019-05-06 Thread GitBox
reminisce commented on a change in pull request #14831: [numpy] Numpy dot
URL: https://github.com/apache/incubator-mxnet/pull/14831#discussion_r281449073
 
 

 ##
 File path: src/operator/numpy/np_dot-inl.h
 ##
 @@ -0,0 +1,284 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_dot-inl.h
+ * \brief Function definition of matrix numpy-compatible dot operator
+ */
+
+#ifndef MXNET_OPERATOR_NUMPY_NP_DOT_INL_H_
+#define MXNET_OPERATOR_NUMPY_NP_DOT_INL_H_
+
+#include 
+#include 
+#include "../tensor/dot-inl.h"
+#include "../tensor/elemwise_binary_op.h"
+#include "../tensor/broadcast_reduce_op.h"
+
+namespace mxnet {
+namespace op {
+
+inline bool NumpyDotShape(const nnvm::NodeAttrs& attrs,
+  mxnet::ShapeVector *in_attrs,
+  mxnet::ShapeVector *out_attrs) {
+  CHECK_EQ(in_attrs->size(), 2U);
+  CHECK_EQ(out_attrs->size(), 1U);
+
+  const mxnet::TShape& a_shape = in_attrs->at(0);
+  const mxnet::TShape& b_shape = in_attrs->at(1);
+
 
 Review comment:
   Need to check whether both `a_shape` and `b_shape` are known. If not, return 
false.




[GitHub] [incubator-mxnet] stephenrawls commented on issue #14208: Add support for fast variable-length LSTM

2019-05-06 Thread GitBox
stephenrawls commented on issue #14208: Add support for fast variable-length 
LSTM
URL: https://github.com/apache/incubator-mxnet/pull/14208#issuecomment-489888660
 
 
   All unit tests except for the following passed:
   ```
   AssertionError: Diff. between MXNet & TensorRT accuracy too high:
  MXNet = 99.15, TensorRT = 99.14
   ```
   
   So I pushed a new minor change to trigger unit tests to run again and get 
this flaky unit test to pass.




[GitHub] [incubator-mxnet] TaoLv commented on issue #14891: [Doc] Add MKL-DNN operator list

2019-05-06 Thread GitBox
TaoLv commented on issue #14891: [Doc] Add MKL-DNN operator list
URL: https://github.com/apache/incubator-mxnet/pull/14891#issuecomment-489884078
 
 
   Hi @aaronmarkham, may I have your help? CI complains about converting the markdown table:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fwebsite/detail/PR-14891/3/pipeline/85#step-123-log-1335




[GitHub] [incubator-mxnet] Marcovaldong commented on issue #14017: Loading parameters from pretrained gluon model

2019-05-06 Thread GitBox
Marcovaldong commented on issue #14017: Loading parameters from pretrained 
gluon model
URL: 
https://github.com/apache/incubator-mxnet/issues/14017#issuecomment-489881679
 
 
   > @MaJieCornell you can still update the parameter files by calling 
`param_dict = mx.nd.load('filename.params')`, which returns a dictionary of 
NDArrays. You can delete some elements from this dictionary, and then 
`mx.nd.save(param_dict, 'new_filename.params')`, and load that in Gluon with 
`allow_missing=True`.
   
   @szha  It should be 
   ```
   param_dict = mx.nd.load('filename.params')
   mx.nd.save('new_filename.params', param_dict)
   ```




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #14889: fix add_n bug: when input mem overlap with output mem, results is wrong

2019-05-06 Thread GitBox
TaoLv commented on a change in pull request #14889: fix add_n bug: when input 
mem overlap with output mem, results is wrong
URL: https://github.com/apache/incubator-mxnet/pull/14889#discussion_r281435713
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -8284,6 +8284,18 @@ def check_concat(shape1, shape2, axis):
 check_concat((8, 0, 0), (8, 0, 0), 2)
 
 
+@with_seed()
+def test_elemwise_sum_add_n():
+    data_shape = (2, 2)
+    input_num = 5
+    data = [mx.nd.random.uniform(shape=data_shape) for i in range(input_num)]
+    rslt = mx.nd.zeros(shape=data_shape)
+    for i in range(input_num):
+        rslt += data[i]
+    add_n_rslt = mx.nd.add_n(*data,out=data[0])
 
 Review comment:
   add a space before `out=data[0]`.




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #14889: fix add_n bug: when input mem overlap with output mem, results is wrong

2019-05-06 Thread GitBox
TaoLv commented on a change in pull request #14889: fix add_n bug: when input 
mem overlap with output mem, results is wrong
URL: https://github.com/apache/incubator-mxnet/pull/14889#discussion_r281435155
 
 

 ##
 File path: src/ndarray/ndarray_function.cc
 ##
 @@ -207,7 +207,10 @@ void ElementwiseSumContainsDnsImpl(mshadow::Stream* s,
   using namespace mxnet::op::mxnet_op;
   const TBlob& out_data = out->data();
   MSHADOW_TYPE_SWITCH(out->dtype(), DType, {  // data type
-Kernel::Launch(s, out_data.Size(), out_data.dptr());
+// Do not set_zero if output mem inplace with input mem: elemwise_sum.cc FInplaceOption
 
 Review comment:
   Make this comment easier to understand: note that the output can be in-placed with the *first* input.
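   For context, a condensed sketch of the scenario this comment is about (essentially the unit test quoted in the neighbouring review comment): `add_n` allows its output to share memory with the first input, so unconditionally zeroing the output buffer would also wipe that input:

   ```python
   import mxnet as mx

   data = [mx.nd.ones(shape=(2, 2)) for _ in range(5)]
   expected = mx.nd.zeros(shape=(2, 2))
   for d in data:
       expected += d                       # element-wise reference sum

   out = mx.nd.add_n(*data, out=data[0])   # output buffer aliases the first input
   assert (out.asnumpy() == expected.asnumpy()).all()  # holds with the fix in this PR
   ```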




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #14889: fix add_n bug: when input mem overlap with output mem, results is wrong

2019-05-06 Thread GitBox
TaoLv commented on a change in pull request #14889: fix add_n bug: when input 
mem overlap with output mem, results is wrong
URL: https://github.com/apache/incubator-mxnet/pull/14889#discussion_r281435577
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -8284,6 +8284,18 @@ def check_concat(shape1, shape2, axis):
 check_concat((8, 0, 0), (8, 0, 0), 2)
 
 
+@with_seed()
+def test_elemwise_sum_add_n():
 
 Review comment:
   test_add_n() ?




[incubator-mxnet] branch master updated (0255dd6 -> fdd45cf)

2019-05-06 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 0255dd6  [Dependency Update] Upgrade cuDNN & NCCL (#14884)
 add fdd45cf  Add mkldnn_version.h to pip package (#14899)

No new revisions were added by this update.

Summary of changes:
 tools/pip/setup.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [incubator-mxnet] TaoLv commented on issue #14899: Add mkldnn_version.h to pip package

2019-05-06 Thread GitBox
TaoLv commented on issue #14899: Add mkldnn_version.h to pip package
URL: https://github.com/apache/incubator-mxnet/pull/14899#issuecomment-489872739
 
 
   Thank you for the fix @yuxihu. Merging now.




[GitHub] [incubator-mxnet] TaoLv merged pull request #14899: Add mkldnn_version.h to pip package

2019-05-06 Thread GitBox
TaoLv merged pull request #14899: Add mkldnn_version.h to pip package
URL: https://github.com/apache/incubator-mxnet/pull/14899
 
 
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-05-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 89ec1da  Bump the publish timestamp.
89ec1da is described below

commit 89ec1da7420fcdd5d5f744e931f4f6bc0819b38a
Author: mxnet-ci 
AuthorDate: Tue May 7 01:18:00 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..83a30a7
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue May  7 01:17:59 UTC 2019



[GitHub] [incubator-mxnet] stephenrawls commented on a change in pull request #14208: Add support for fast variable-length LSTM

2019-05-06 Thread GitBox
stephenrawls commented on a change in pull request #14208: Add support for fast 
variable-length LSTM
URL: https://github.com/apache/incubator-mxnet/pull/14208#discussion_r281430841
 
 

 ##
 File path: tests/python/gpu/test_gluon_gpu.py
 ##
 @@ -225,6 +226,54 @@ def forward(self, inpt):
 assert_allclose(net(data).asnumpy(), ref_net(data).asnumpy())
 
 
+def check_layer_bidirectional_varseqlen(size, in_size):
+class RefBiLSTMVarSeqLen(gluon.Block):
+def __init__(self, size, **kwargs):
+super(RefBiLSTMVarSeqLen, self).__init__(**kwargs)
+with self.name_scope():
+self._lstm_fwd = gluon.rnn.LSTM(size, bidirectional=False, 
prefix='l0')
+self._lstm_bwd = gluon.rnn.LSTM(size, bidirectional=False, 
prefix='r0')
+
+def forward(self, inpt, sequence_length):
+fwd = self._lstm_fwd(inpt)
+bwd_inpt = nd.SequenceReverse(inpt, 
sequence_length=sequence_length, use_sequence_length=True)
+bwd = self._lstm_bwd(bwd_inpt)
+bwd = nd.SequenceReverse(bwd, sequence_length=sequence_length, 
use_sequence_length=True)
+return nd.concat(fwd, bwd, dim=2)
+weights = {}
+for d in ['l', 'r']:
+weights['lstm_{}0_i2h_weight'.format(d)] = 
mx.random.uniform(shape=(size*4, in_size))
+weights['lstm_{}0_h2h_weight'.format(d)] = 
mx.random.uniform(shape=(size*4, size))
+weights['lstm_{}0_i2h_bias'.format(d)] = 
mx.random.uniform(shape=(size*4,))
+weights['lstm_{}0_h2h_bias'.format(d)] = 
mx.random.uniform(shape=(size*4,))
+
+net = gluon.rnn.LSTM(size, bidirectional=True, use_sequence_length=True, 
prefix='lstm_')
+ref_net = RefBiLSTMVarSeqLen(size, prefix='lstm_')
+net.initialize()
+ref_net.initialize()
+net_params = net.collect_params()
+ref_net_params = ref_net.collect_params()
+for k in weights:
+net_params[k].set_data(weights[k])
+ref_net_params[k.replace('l0', 'l0l0').replace('r0', 
'r0l0')].set_data(weights[k])
+
+
+batch_size = 10
+num_timesteps = 11
+data = mx.random.uniform(shape=(num_timesteps, batch_size, in_size))
+
+# TODO: figure out why int32 doesn't work here
+sequence_length = nd.random.randint(1, num_timesteps+1, 
shape=(batch_size)).astype("float")
+
+net_output = net(data, sequence_length=sequence_length).asnumpy()
+ref_net_output = ref_net(data, sequence_length).asnumpy()
+sequence_length_np = sequence_length.asnumpy().astype("int32")
+
+# Only compare the valid sections for each batch entry
+for b in range(batch_size):
+assert_allclose(net_output[:sequence_length_np[b], b], 
ref_net_output[:sequence_length_np[b], b])
 
 Review comment:
   Ah, good point. I am only testing the returned output, not the return state.
   
   I guess I could just loop over the batch elements one-by-one, passing them 
each in turn to the reference lstm. That way each input is correctly sized and 
I can easily grab the right return state.
   
   Should I do that now or in the follow-on PR?




[GitHub] [incubator-mxnet] szha commented on a change in pull request #14208: Add support for fast variable-length LSTM

2019-05-06 Thread GitBox
szha commented on a change in pull request #14208: Add support for fast 
variable-length LSTM
URL: https://github.com/apache/incubator-mxnet/pull/14208#discussion_r281429171
 
 

 ##
 File path: tests/python/gpu/test_gluon_gpu.py
 ##
 @@ -225,6 +226,54 @@ def forward(self, inpt):
 assert_allclose(net(data).asnumpy(), ref_net(data).asnumpy())
 
 
+def check_layer_bidirectional_varseqlen(size, in_size):
+class RefBiLSTMVarSeqLen(gluon.Block):
+def __init__(self, size, **kwargs):
+super(RefBiLSTMVarSeqLen, self).__init__(**kwargs)
+with self.name_scope():
+self._lstm_fwd = gluon.rnn.LSTM(size, bidirectional=False, 
prefix='l0')
+self._lstm_bwd = gluon.rnn.LSTM(size, bidirectional=False, 
prefix='r0')
+
+def forward(self, inpt, sequence_length):
+fwd = self._lstm_fwd(inpt)
+bwd_inpt = nd.SequenceReverse(inpt, 
sequence_length=sequence_length, use_sequence_length=True)
+bwd = self._lstm_bwd(bwd_inpt)
+bwd = nd.SequenceReverse(bwd, sequence_length=sequence_length, 
use_sequence_length=True)
+return nd.concat(fwd, bwd, dim=2)
+weights = {}
+for d in ['l', 'r']:
+weights['lstm_{}0_i2h_weight'.format(d)] = 
mx.random.uniform(shape=(size*4, in_size))
+weights['lstm_{}0_h2h_weight'.format(d)] = 
mx.random.uniform(shape=(size*4, size))
+weights['lstm_{}0_i2h_bias'.format(d)] = 
mx.random.uniform(shape=(size*4,))
+weights['lstm_{}0_h2h_bias'.format(d)] = 
mx.random.uniform(shape=(size*4,))
+
+net = gluon.rnn.LSTM(size, bidirectional=True, use_sequence_length=True, 
prefix='lstm_')
+ref_net = RefBiLSTMVarSeqLen(size, prefix='lstm_')
+net.initialize()
+ref_net.initialize()
+net_params = net.collect_params()
+ref_net_params = ref_net.collect_params()
+for k in weights:
+net_params[k].set_data(weights[k])
+ref_net_params[k.replace('l0', 'l0l0').replace('r0', 
'r0l0')].set_data(weights[k])
+
+
+batch_size = 10
+num_timesteps = 11
+data = mx.random.uniform(shape=(num_timesteps, batch_size, in_size))
+
+# TODO: figure out why int32 doesn't work here
+sequence_length = nd.random.randint(1, num_timesteps+1, 
shape=(batch_size)).astype("float")
+
+net_output = net(data, sequence_length=sequence_length).asnumpy()
+ref_net_output = ref_net(data, sequence_length).asnumpy()
+sequence_length_np = sequence_length.asnumpy().astype("int32")
+
+# Only compare the valid sections for each batch entry
+for b in range(batch_size):
+assert_allclose(net_output[:sequence_length_np[b], b], 
ref_net_output[:sequence_length_np[b], b])
 
 Review comment:
   I see. I mistook the use_sequence_length flag to be in rnn op. Still, 
whether the returned state is of the right step or not is not tested, which is 
also an important aspect of variable length RNN support. It may be hard to test 
it using RNN layer as reference.




[GitHub] [incubator-mxnet] stephenrawls commented on a change in pull request #14208: Add support for fast variable-length LSTM

2019-05-06 Thread GitBox
stephenrawls commented on a change in pull request #14208: Add support for fast 
variable-length LSTM
URL: https://github.com/apache/incubator-mxnet/pull/14208#discussion_r281426954
 
 

 ##
 File path: src/imperative/imperative_utils.h
 ##
 @@ -67,6 +67,7 @@ inline Context GetContext(const nnvm::NodeAttrs& attrs,
   Context ctx;
   if (inputs.size()) {
 ctx = inputs[0]->ctx();
+
 
 Review comment:
   good catch will do




[GitHub] [incubator-mxnet] stephenrawls commented on a change in pull request #14208: Add support for fast variable-length LSTM

2019-05-06 Thread GitBox
stephenrawls commented on a change in pull request #14208: Add support for fast 
variable-length LSTM
URL: https://github.com/apache/incubator-mxnet/pull/14208#discussion_r281426918
 
 

 ##
 File path: tests/python/gpu/test_gluon_gpu.py
 ##
 @@ -225,6 +226,54 @@ def forward(self, inpt):
 assert_allclose(net(data).asnumpy(), ref_net(data).asnumpy())
 
 
+def check_layer_bidirectional_varseqlen(size, in_size):
+    class RefBiLSTMVarSeqLen(gluon.Block):
+        def __init__(self, size, **kwargs):
+            super(RefBiLSTMVarSeqLen, self).__init__(**kwargs)
+            with self.name_scope():
+                self._lstm_fwd = gluon.rnn.LSTM(size, bidirectional=False, prefix='l0')
+                self._lstm_bwd = gluon.rnn.LSTM(size, bidirectional=False, prefix='r0')
+
+        def forward(self, inpt, sequence_length):
+            fwd = self._lstm_fwd(inpt)
+            bwd_inpt = nd.SequenceReverse(inpt, sequence_length=sequence_length, use_sequence_length=True)
+            bwd = self._lstm_bwd(bwd_inpt)
+            bwd = nd.SequenceReverse(bwd, sequence_length=sequence_length, use_sequence_length=True)
+            return nd.concat(fwd, bwd, dim=2)
+    weights = {}
+    for d in ['l', 'r']:
+        weights['lstm_{}0_i2h_weight'.format(d)] = mx.random.uniform(shape=(size*4, in_size))
+        weights['lstm_{}0_h2h_weight'.format(d)] = mx.random.uniform(shape=(size*4, size))
+        weights['lstm_{}0_i2h_bias'.format(d)] = mx.random.uniform(shape=(size*4,))
+        weights['lstm_{}0_h2h_bias'.format(d)] = mx.random.uniform(shape=(size*4,))
+
+    net = gluon.rnn.LSTM(size, bidirectional=True, use_sequence_length=True, prefix='lstm_')
+    ref_net = RefBiLSTMVarSeqLen(size, prefix='lstm_')
+    net.initialize()
+    ref_net.initialize()
+    net_params = net.collect_params()
+    ref_net_params = ref_net.collect_params()
+    for k in weights:
+        net_params[k].set_data(weights[k])
+        ref_net_params[k.replace('l0', 'l0l0').replace('r0', 'r0l0')].set_data(weights[k])
+
+
+    batch_size = 10
+    num_timesteps = 11
+    data = mx.random.uniform(shape=(num_timesteps, batch_size, in_size))
+
+    # TODO: figure out why int32 doesn't work here
+    sequence_length = nd.random.randint(1, num_timesteps+1, shape=(batch_size)).astype("float")
+
+    net_output = net(data, sequence_length=sequence_length).asnumpy()
+    ref_net_output = ref_net(data, sequence_length).asnumpy()
+    sequence_length_np = sequence_length.asnumpy().astype("int32")
+
+    # Only compare the valid sections for each batch entry
+    for b in range(batch_size):
+        assert_allclose(net_output[:sequence_length_np[b], b], ref_net_output[:sequence_length_np[b], b])
 
 Review comment:
   The reference net is not using the sequence-length feature of cudnn, because use_sequence_length defaults to false.
   
   The reference net instead implements variable-sequence-length support manually: it uses two separate LSTMs for the forward and backward directions, reverses the input and output by hand with SequenceReverse, and concatenates the two directions. That is, it does the work a slower way than via cudnn, but in a way we know should produce correct results.
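
   For readers unfamiliar with that pattern, a minimal standalone sketch (not part of the PR) of the masking behaviour the reference net relies on: `SequenceReverse` with `use_sequence_length=True` reverses only the first `sequence_length[b]` steps of each batch entry and leaves the padded tail untouched.
   
   ```
   import mxnet as mx
   from mxnet import nd
   
   # Toy data in (time, batch, feature) layout, which SequenceReverse expects.
   data = nd.arange(12).reshape((4, 3, 1))
   seq_len = nd.array([4, 2, 3])            # valid steps per batch entry
   
   rev = nd.SequenceReverse(data, sequence_length=seq_len, use_sequence_length=True)
   
   print(data[:, 1, 0].asnumpy())   # [ 1.  4.  7. 10.]
   print(rev[:, 1, 0].asnumpy())    # [ 4.  1.  7. 10.] -- only the 2 valid steps were reversed
   ```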


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on a change in pull request #14208: Add support for fast variable-length LSTM

2019-05-06 Thread GitBox
szha commented on a change in pull request #14208: Add support for fast 
variable-length LSTM
URL: https://github.com/apache/incubator-mxnet/pull/14208#discussion_r281425950
 
 

 ##
 File path: src/imperative/imperative_utils.h
 ##
 @@ -67,6 +67,7 @@ inline Context GetContext(const nnvm::NodeAttrs& attrs,
   Context ctx;
   if (inputs.size()) {
 ctx = inputs[0]->ctx();
+
 
 Review comment:
   revert?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on a change in pull request #14208: Add support for fast variable-length LSTM

2019-05-06 Thread GitBox
szha commented on a change in pull request #14208: Add support for fast 
variable-length LSTM
URL: https://github.com/apache/incubator-mxnet/pull/14208#discussion_r281425773
 
 

 ##
 File path: tests/python/gpu/test_gluon_gpu.py
 ##
 @@ -225,6 +226,54 @@ def forward(self, inpt):
 assert_allclose(net(data).asnumpy(), ref_net(data).asnumpy())
 
 
+def check_layer_bidirectional_varseqlen(size, in_size):
+    class RefBiLSTMVarSeqLen(gluon.Block):
+        def __init__(self, size, **kwargs):
+            super(RefBiLSTMVarSeqLen, self).__init__(**kwargs)
+            with self.name_scope():
+                self._lstm_fwd = gluon.rnn.LSTM(size, bidirectional=False, prefix='l0')
+                self._lstm_bwd = gluon.rnn.LSTM(size, bidirectional=False, prefix='r0')
+
+        def forward(self, inpt, sequence_length):
+            fwd = self._lstm_fwd(inpt)
+            bwd_inpt = nd.SequenceReverse(inpt, sequence_length=sequence_length, use_sequence_length=True)
+            bwd = self._lstm_bwd(bwd_inpt)
+            bwd = nd.SequenceReverse(bwd, sequence_length=sequence_length, use_sequence_length=True)
+            return nd.concat(fwd, bwd, dim=2)
+    weights = {}
+    for d in ['l', 'r']:
+        weights['lstm_{}0_i2h_weight'.format(d)] = mx.random.uniform(shape=(size*4, in_size))
+        weights['lstm_{}0_h2h_weight'.format(d)] = mx.random.uniform(shape=(size*4, size))
+        weights['lstm_{}0_i2h_bias'.format(d)] = mx.random.uniform(shape=(size*4,))
+        weights['lstm_{}0_h2h_bias'.format(d)] = mx.random.uniform(shape=(size*4,))
+
+    net = gluon.rnn.LSTM(size, bidirectional=True, use_sequence_length=True, prefix='lstm_')
+    ref_net = RefBiLSTMVarSeqLen(size, prefix='lstm_')
+    net.initialize()
+    ref_net.initialize()
+    net_params = net.collect_params()
+    ref_net_params = ref_net.collect_params()
+    for k in weights:
+        net_params[k].set_data(weights[k])
+        ref_net_params[k.replace('l0', 'l0l0').replace('r0', 'r0l0')].set_data(weights[k])
+
+
+    batch_size = 10
+    num_timesteps = 11
+    data = mx.random.uniform(shape=(num_timesteps, batch_size, in_size))
+
+    # TODO: figure out why int32 doesn't work here
+    sequence_length = nd.random.randint(1, num_timesteps+1, shape=(batch_size)).astype("float")
+
+    net_output = net(data, sequence_length=sequence_length).asnumpy()
+    ref_net_output = ref_net(data, sequence_length).asnumpy()
+    sequence_length_np = sequence_length.asnumpy().astype("int32")
+
+    # Only compare the valid sections for each batch entry
+    for b in range(batch_size):
+        assert_allclose(net_output[:sequence_length_np[b], b], ref_net_output[:sequence_length_np[b], b])
 
 Review comment:
   This doesn't seem to test whether the length-based masking is working properly, because the reference implementation also relies on the sequence-length feature. Consider using LSTMCell as the reference instead.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] stephenrawls commented on a change in pull request #14208: Add support for fast variable-length LSTM

2019-05-06 Thread GitBox
stephenrawls commented on a change in pull request #14208: Add support for fast 
variable-length LSTM
URL: https://github.com/apache/incubator-mxnet/pull/14208#discussion_r281424953
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -28,6 +28,7 @@
 
 #define MXNET_USE_CUDNN_RNN MXNET_USE_CUDNN == 1 && CUDNN_MAJOR >= 5
 #define USE_CUDNN_LSTM_PROJ MXNET_USE_CUDNN == 1 && CUDNN_VERSION >= 7200
+#define USE_VAR_SEQ_LENGTH MXNET_USE_CUDNN == 1 && CUDNN_VERSION >= 7200
 
 Review comment:
   Fair enough. At first I added it since I was putting things that had nothing 
to do with LSTM_PROJ under this #ifdef guard.
   
   Maybe I'll just rename it to something that makes sense for both uses.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on a change in pull request #14208: Add support for fast variable-length LSTM

2019-05-06 Thread GitBox
szha commented on a change in pull request #14208: Add support for fast 
variable-length LSTM
URL: https://github.com/apache/incubator-mxnet/pull/14208#discussion_r281423917
 
 

 ##
 File path: src/operator/rnn-inl.h
 ##
 @@ -28,6 +28,7 @@
 
 #define MXNET_USE_CUDNN_RNN MXNET_USE_CUDNN == 1 && CUDNN_MAJOR >= 5
 #define USE_CUDNN_LSTM_PROJ MXNET_USE_CUDNN == 1 && CUDNN_VERSION >= 7200
+#define USE_VAR_SEQ_LENGTH MXNET_USE_CUDNN == 1 && CUDNN_VERSION >= 7200
 
 Review comment:
   seems unnecessary for the duplicate


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] stephenrawls commented on issue #14208: Add support for fast variable-length LSTM

2019-05-06 Thread GitBox
stephenrawls commented on issue #14208: Add support for fast variable-length 
LSTM
URL: https://github.com/apache/incubator-mxnet/pull/14208#issuecomment-489855413
 
 
   rebased against mainline (requiring a force-push), which now gets all unit 
tests passing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch numpy updated: Enable np op compat check with name prefix (#14897)

2019-05-06 Thread reminisce
This is an automated email from the ASF dual-hosted git repository.

reminisce pushed a commit to branch numpy
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/numpy by this push:
 new 2e10193  Enable np op compat check with name prefix (#14897)
2e10193 is described below

commit 2e101935a3cdc7738dffa1bde1ef5b8fa7e31fc7
Author: reminisce 
AuthorDate: Mon May 6 16:56:36 2019 -0700

Enable np op compat check with name prefix (#14897)
---
 src/c_api/c_api_common.h   | 17 -
 src/operator/numpy/np_broadcast_reduce_op_value.cc |  3 +--
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/src/c_api/c_api_common.h b/src/c_api/c_api_common.h
index 118341d..ab1f5f7 100644
--- a/src/c_api/c_api_common.h
+++ b/src/c_api/c_api_common.h
@@ -163,10 +163,25 @@ inline void CopyAttr(const nnvm::IndexedGraph& idx,
 extern const std::vector<std::string> kHiddenKeys;
 }  // namespace mxnet
 
+/*!
+ * An operator is considered as numpy compatible if it satisfies either one
+ * of the following conditions.
 * 1. The op has the attribute mxnet::TIsNumpyCompatible registered as True.
+ * 2. The op's name starts with the prefix _numpy_.
+ * The first condition is usually for the ops registered as internal ops, such
+ * as _np_add, _true_divide, etc. They are wrapped by some user-facing op
+ * APIs in the Python end.
+ * The second condition is for the ops registered in the backend while exposed
+ * directly to users as is, such as _numpy_sum etc.
+ */
 inline bool IsNumpyCompatOp(const nnvm::Op* op) {
   static const auto& is_np_compat =
       nnvm::Op::GetAttr<mxnet::TIsNumpyCompatible>("TIsNumpyCompatible");
-  return is_np_compat.get(op, false);
+  if (is_np_compat.get(op, false)) {
+    return true;
+  }
+  static const std::string prefix = "_numpy_";
+  return op->name.find(prefix.c_str(), 0, prefix.size()) != std::string::npos;
 }
 
 #endif  // MXNET_C_API_C_API_COMMON_H_
diff --git a/src/operator/numpy/np_broadcast_reduce_op_value.cc 
b/src/operator/numpy/np_broadcast_reduce_op_value.cc
index 13b575a..6c81bf6 100644
--- a/src/operator/numpy/np_broadcast_reduce_op_value.cc
+++ b/src/operator/numpy/np_broadcast_reduce_op_value.cc
@@ -65,8 +65,7 @@ NNVM_REGISTER_OP(_numpy_sum)
   [](const NodeAttrs& attrs) {
     return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};
   })
-.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseNone{"_backward_numpy_sum"})
-.set_attr<mxnet::TIsNumpyCompatible>("TIsNumpyCompatible", true);
+.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseNone{"_backward_numpy_sum"});
 
 NNVM_REGISTER_OP(_backward_numpy_sum)
 .set_num_outputs(1)



[GitHub] [incubator-mxnet] reminisce merged pull request #14897: [numpy] Enable np op compat check with name prefix

2019-05-06 Thread GitBox
reminisce merged pull request #14897: [numpy] Enable np op compat check with 
name prefix
URL: https://github.com/apache/incubator-mxnet/pull/14897
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ehsanmok commented on issue #14875: MXNet to ONNX export bug

2019-05-06 Thread GitBox
ehsanmok commented on issue #14875: MXNet to ONNX export bug
URL: 
https://github.com/apache/incubator-mxnet/issues/14875#issuecomment-489826598
 
 
   Same error with ONNX 1.2.2


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] lanking520 commented on issue #14887: [WIP][Dependency Update] CUDA10.1 Support

2019-05-06 Thread GitBox
lanking520 commented on issue #14887: [WIP][Dependency Update] CUDA10.1 Support
URL: https://github.com/apache/incubator-mxnet/pull/14887#issuecomment-489824987
 
 
   General thought: do you think it is necessary for us to have some real-time benchmarking of the performance once we do an upgrade like this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] AnaRhisT94 commented on issue #14875: MXNet to ONNX export bug

2019-05-06 Thread GitBox
AnaRhisT94 commented on issue #14875: MXNet to ONNX export bug
URL: 
https://github.com/apache/incubator-mxnet/issues/14875#issuecomment-489823249
 
 
   Try to use ONNX 1.2.2


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] yuxihu commented on a change in pull request #14899: Add mkldnn_version.h to pip package

2019-05-06 Thread GitBox
yuxihu commented on a change in pull request #14899: Add mkldnn_version.h to 
pip package
URL: https://github.com/apache/incubator-mxnet/pull/14899#discussion_r281399983
 
 

 ##
 File path: tools/pip/setup.py
 ##
 @@ -152,6 +152,8 @@ def has_ext_modules(self):
 package_data['mxnet'].append('mxnet/libmkldnn.so.0')
 shutil.copytree(os.path.join(CURRENT_DIR, 
'mxnet-build/3rdparty/mkldnn/include'),
 
 Review comment:
   ok, changed to copy from build/install/include folder.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha merged pull request #14884: [Dependency Update] Upgrade cuDNN & NCCL

2019-05-06 Thread GitBox
szha merged pull request #14884: [Dependency Update] Upgrade cuDNN & NCCL
URL: https://github.com/apache/incubator-mxnet/pull/14884
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (a722db4 -> 0255dd6)

2019-05-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from a722db4  [Dependency Update] Upgrade openssl to 1.1.1b (#14837)
 add 0255dd6  [Dependency Update] Upgrade cuDNN & NCCL (#14884)

No new revisions were added by this update.

Summary of changes:
 tools/setup_gpu_build_tools.sh | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)



[GitHub] [incubator-mxnet] vrakesh commented on issue #14895: [DOC] Build from source link for ubuntu is broken

2019-05-06 Thread GitBox
vrakesh commented on issue #14895: [DOC] Build from source link for ubuntu is 
broken
URL: 
https://github.com/apache/incubator-mxnet/issues/14895#issuecomment-489805687
 
 
   Looking into this, this seems to be true for all the links. Since we convert our md docs to HTML for the docs website, it looks like the HTML links are hardcoded.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on a change in pull request #14899: Add mkldnn_version.h to pip package

2019-05-06 Thread GitBox
szha commented on a change in pull request #14899: Add mkldnn_version.h to pip 
package
URL: https://github.com/apache/incubator-mxnet/pull/14899#discussion_r281381104
 
 

 ##
 File path: tools/pip/setup.py
 ##
 @@ -152,6 +152,8 @@ def has_ext_modules(self):
 package_data['mxnet'].append('mxnet/libmkldnn.so.0')
 shutil.copytree(os.path.join(CURRENT_DIR, 
'mxnet-build/3rdparty/mkldnn/include'),
 
 Review comment:
   ```
   % ls 3rdparty/mkldnn/build/install/include
   i_malloc.h  mkl.h  mkl_blas.h  mkl_cblas.h  mkl_direct_blas.h  mkl_direct_blas_kernels.h
   mkl_direct_call.h  mkl_direct_lapack.h  mkl_direct_types.h  mkl_dnn.h  mkl_dnn_types.h
   mkl_lapack.h  mkl_lapacke.h  mkl_service.h  mkl_trans.h  mkl_types.h  mkl_version.h
   mkl_vml.h  mkl_vml_defines.h  mkl_vml_functions.h  mkl_vml_types.h  mkl_vsl.h
   mkl_vsl_defines.h  mkl_vsl_functions.h  mkl_vsl_types.h
   ```
   The one in the install folder has both


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] yuxihu commented on a change in pull request #14899: Add mkldnn_version.h to pip package

2019-05-06 Thread GitBox
yuxihu commented on a change in pull request #14899: Add mkldnn_version.h to 
pip package
URL: https://github.com/apache/incubator-mxnet/pull/14899#discussion_r281380164
 
 

 ##
 File path: tools/pip/setup.py
 ##
 @@ -152,6 +152,8 @@ def has_ext_modules(self):
 package_data['mxnet'].append('mxnet/libmkldnn.so.0')
 shutil.copytree(os.path.join(CURRENT_DIR, 
'mxnet-build/3rdparty/mkldnn/include'),
 
 Review comment:
   I did not see the other headers are copied in my local environment. That's 
why I added separately.
   
   ```
   root@8c384f679438:/build/mxnet-build/3rdparty/mkldnn/build/include# ls
   mkldnn_version.h
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on a change in pull request #14899: Add mkldnn_version.h to pip package

2019-05-06 Thread GitBox
szha commented on a change in pull request #14899: Add mkldnn_version.h to pip 
package
URL: https://github.com/apache/incubator-mxnet/pull/14899#discussion_r281378181
 
 

 ##
 File path: tools/pip/setup.py
 ##
 @@ -152,6 +152,8 @@ def has_ext_modules(self):
 package_data['mxnet'].append('mxnet/libmkldnn.so.0')
 shutil.copytree(os.path.join(CURRENT_DIR, 
'mxnet-build/3rdparty/mkldnn/include'),
 
 Review comment:
   I just verified that the rest of the headers are copied there too. Let's 
just use the folder.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sandeep-krishnamurthy commented on issue #8646: Bug in FullyConnected???

2019-05-06 Thread GitBox
sandeep-krishnamurthy commented on issue #8646: Bug in FullyConnected???
URL: 
https://github.com/apache/incubator-mxnet/issues/8646#issuecomment-489787170
 
 
   Hello @xuzhm 
   Sorry for the delay in our response.
   I ran a sample to verify the above behavior
   ```
   data_0 -> fc_0  \
   data_1 -> fc_1   \ 
   data_2 -> fc_2  => sum
   data_3 -> fc_3  /
   data_4 -> fc_4 /
   ```
   Then I compared the following:
   1. End to end in a single module
   2. Individually calculating the output of fc_0, ... fc_4 and then summing, to compare with the above result
   
   I find the results are the same. I am using the latest MXNet (1.5.0), installed with `pip install mxnet-mkl --pre`.
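
   For reference, a minimal sketch of that comparison (my own toy version with made-up sizes, not the original poster's model), using Gluon `Dense` layers and `add_n`:
   
   ```
   import mxnet as mx
   from mxnet import gluon, nd
   
   units, in_dim, batch = 8, 4, 2
   fcs = [gluon.nn.Dense(units) for _ in range(5)]
   for fc in fcs:
       fc.initialize()
   data = [nd.random.uniform(shape=(batch, in_dim)) for _ in range(5)]
   
   # (1) end to end: feed all five branches and sum in a single expression
   end_to_end = nd.add_n(*[fc(x) for fc, x in zip(fcs, data)])
   
   # (2) evaluate each branch individually, then sum the stored outputs
   outputs = [fc(x) for fc, x in zip(fcs, data)]
   summed = outputs[0]
   for out in outputs[1:]:
       summed = summed + out
   
   assert nd.max(nd.abs(end_to_end - summed)).asscalar() < 1e-6
   ```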
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya edited a comment on issue #14568: NAG Optimizer with multi-precision support

2019-05-06 Thread GitBox
anirudhacharya edited a comment on issue #14568: NAG Optimizer with 
multi-precision support
URL: https://github.com/apache/incubator-mxnet/pull/14568#issuecomment-489775724
 
 
   @mxnet-label-bot update [pr-awaiting-merge]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy opened a new pull request #14900: Fix warning / static function in header.

2019-05-06 Thread GitBox
larroy opened a new pull request #14900: Fix warning / static function in 
header.
URL: https://github.com/apache/incubator-mxnet/pull/14900
 
 
   ## Description ##
   Fix static function in header. Static functions should not be in headers, since every translation unit that includes the header gets its own internal-linkage copy, which bloats the binary and can trigger unused-function warnings.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14568: NAG Optimizer with multi-precision support

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14568: NAG Optimizer with multi-precision 
support
URL: https://github.com/apache/incubator-mxnet/pull/14568#issuecomment-489775724
 
 
   @mxnet-label-bot remove [pr-awaiting-review] add [pr-awaiting-merge]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] yuxihu commented on a change in pull request #14899: Add mkldnn_version.h to pip package

2019-05-06 Thread GitBox
yuxihu commented on a change in pull request #14899: Add mkldnn_version.h to 
pip package
URL: https://github.com/apache/incubator-mxnet/pull/14899#discussion_r281357126
 
 

 ##
 File path: tools/pip/setup.py
 ##
 @@ -152,6 +152,8 @@ def has_ext_modules(self):
 package_data['mxnet'].append('mxnet/libmkldnn.so.0')
 shutil.copytree(os.path.join(CURRENT_DIR, 
'mxnet-build/3rdparty/mkldnn/include'),
 
 Review comment:
   `mkldnn_version.h` is generated and placed under the `build/include` folder, while the other header files are still in the source `include` folder, so we have to copy from two different places.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-05-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new eac47c5  Bump the publish timestamp.
eac47c5 is described below

commit eac47c582c7c4e75f3703bc5546902b77bff89b7
Author: mxnet-ci 
AuthorDate: Mon May 6 20:50:57 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..e8090ba
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon May  6 20:50:57 UTC 2019



[GitHub] [incubator-mxnet] szha commented on a change in pull request #14899: Add mkldnn_version.h to pip package

2019-05-06 Thread GitBox
szha commented on a change in pull request #14899: Add mkldnn_version.h to pip 
package
URL: https://github.com/apache/incubator-mxnet/pull/14899#discussion_r281353975
 
 

 ##
 File path: tools/pip/setup.py
 ##
 @@ -152,6 +152,8 @@ def has_ext_modules(self):
 package_data['mxnet'].append('mxnet/libmkldnn.so.0')
 shutil.copytree(os.path.join(CURRENT_DIR, 
'mxnet-build/3rdparty/mkldnn/include'),
 
 Review comment:
   I thought the whole mkldnn include folder should already have been included 
here. Should we just change this to copy from the build folder instead?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] yuxihu commented on issue #14899: Add mkldnn_version.h to pip package

2019-05-06 Thread GitBox
yuxihu commented on issue #14899: Add mkldnn_version.h to pip package
URL: https://github.com/apache/incubator-mxnet/pull/14899#issuecomment-489769583
 
 
   @szha @pengzhao-intel @TaoLv Please help review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] yuxihu opened a new pull request #14899: Add mkldnn_version.h to pip package

2019-05-06 Thread GitBox
yuxihu opened a new pull request #14899: Add mkldnn_version.h to pip package
URL: https://github.com/apache/incubator-mxnet/pull/14899
 
 
   Per changes in #13668, we generate a new header file `mkldnn_version.h` during MXNet compilation based on the template file 
[mkldnn_version.h.in](https://github.com/intel/mkl-dnn/blob/7de7e5d02bf687f971e7668963649728356e0c20/include/mkldnn_version.h.in). This dynamically generated header file needs to be added to the MXNet pip package to prevent compilation failures when building Horovod against an MKLDNN-enabled MXNet pip package.
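
   A rough sketch of the idea (the paths below are illustrative, not the exact setup.py change): copy the generated header from the build include directory into the same destination as the rest of the MKL-DNN headers shipped in the wheel.
   
   ```
   import os
   import shutil
   
   # Illustrative paths only; the real setup.py derives these from CURRENT_DIR.
   build_include = 'mxnet-build/3rdparty/mkldnn/build/include'  # holds the generated mkldnn_version.h
   dest_include = 'mxnet/include/mkldnn'                        # hypothetical destination inside the package
   
   version_header = os.path.join(build_include, 'mkldnn_version.h')
   if os.path.exists(version_header):
       if not os.path.isdir(dest_include):
           os.makedirs(dest_include)
       shutil.copy(version_header, dest_include)
   ```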
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Caenorst opened a new pull request #14898: prevent TRT_Logger to be destroyed before TRT engine

2019-05-06 Thread GitBox
Caenorst opened a new pull request #14898: prevent TRT_Logger to be destroyed 
before TRT engine
URL: https://github.com/apache/incubator-mxnet/pull/14898
 
 
   ## Description ##
   prevent TRT_Logger to be destroyed before TRT engine
   
   ## Checklist ##
   ### Essentials ###
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   
   ### Changes ###
   - add TRT_Logger in TRTEngineParam
   - delete TRT_Logger when TRTEngineParam
   
   ## Comments ##
   - Correct a bug that is exposed when using CUDA 10.1
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] vrakesh commented on issue #14892: embedding_lookup_sparse(wide_w, ids, weights, combiner="sum") C++ api

2019-05-06 Thread GitBox
vrakesh commented on issue #14892: embedding_lookup_sparse(wide_w, ids, 
weights, combiner="sum") C++ api
URL: 
https://github.com/apache/incubator-mxnet/issues/14892#issuecomment-489767938
 
 
   @songziqin Thanks for the question, @leleamol  requesting your input on this 
question.
   
   @mxnet-label-bot  add [Question]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce opened a new pull request #14897: [numpy] Enable np op compat check with name prefix

2019-05-06 Thread GitBox
reminisce opened a new pull request #14897: [numpy] Enable np op compat check 
with name prefix
URL: https://github.com/apache/incubator-mxnet/pull/14897
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy commented on issue #14896: Add STL checks via -D_GLIBCXX_ASSERTIONS in debug mode

2019-05-06 Thread GitBox
larroy commented on issue #14896: Add STL checks via -D_GLIBCXX_ASSERTIONS in 
debug mode
URL: https://github.com/apache/incubator-mxnet/pull/14896#issuecomment-489744977
 
 
   @mxnet-label-bot add [Build,CMake,Make]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] larroy opened a new pull request #14896: Add STL checks via -D_GLIBCXX_ASSERTIONS in debug mode

2019-05-06 Thread GitBox
larroy opened a new pull request #14896: Add STL checks via 
-D_GLIBCXX_ASSERTIONS in debug mode
URL: https://github.com/apache/incubator-mxnet/pull/14896
 
 
   ## Description ##
   Add checks for C++ STL containers in DEBUG builds.
   
   This flag enables correctness checks on STL containers, such as range checks, that catch out-of-bounds accesses and other undefined behaviour while running in debug mode. It will make our code more resilient and help us catch bugs when running in Debug.
   
   It doesn't affect release builds.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] vladoovtcharov commented on a change in pull request #14252: Allow clearing gpu cache

2019-05-06 Thread GitBox
vladoovtcharov commented on a change in pull request #14252: Allow clearing gpu 
cache
URL: https://github.com/apache/incubator-mxnet/pull/14252#discussion_r281321701
 
 

 ##
 File path: python/mxnet/context.py
 ##
 @@ -145,6 +145,11 @@ def default_ctx(cls, val):
 cls._default_ctx.value = val
 #pylint: enable=no-self-argument
 
+def release_all(self):
 
 Review comment:
   Good point. Yes, I'll add some documentation to make it clear that it won't release all memory, just unreferenced data. The equivalent in PyTorch is called `empty_cache`, so maybe that would be a better name to use.
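
   A hypothetical usage sketch of the method proposed in this PR (the name `release_all` may still change, e.g. to `empty_cache` as discussed above); it only hands pooled, unreferenced blocks back to the driver:
   
   ```
   import mxnet as mx
   
   ctx = mx.gpu(0)
   x = mx.nd.ones((1024, 1024), ctx=ctx)
   y = x * 2
   mx.nd.waitall()
   
   del x, y            # the arrays become unreferenced but stay in MXNet's memory pool
   ctx.release_all()   # proposed call: return the unreferenced pooled memory to the driver
   ```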


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-05-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 372a957  Bump the publish timestamp.
372a957 is described below

commit 372a95734161cca0ee4707d41efa6db2c38b0984
Author: mxnet-ci 
AuthorDate: Mon May 6 19:17:57 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..b188463
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Mon May  6 19:17:57 UTC 2019



[GitHub] [incubator-mxnet] SSE4 commented on a change in pull request #13400: [MXNET-1229] use OpenBLAS, lapack & OpenCV from conan

2019-05-06 Thread GitBox
SSE4 commented on a change in pull request #13400: [MXNET-1229] use OpenBLAS, 
lapack & OpenCV from conan
URL: https://github.com/apache/incubator-mxnet/pull/13400#discussion_r281309796
 
 

 ##
 File path: conanfile.py
 ##
 @@ -0,0 +1,28 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from conans import ConanFile
+
+class IncubatorMXNetConan(ConanFile):
+settings = "os", "compiler", "build_type", "arch"
+requires = "openblas/0.2.20@conan/stable", "opencv/3.4.3@conan/stable", 
"lapack/3.7.1@conan/stable"
 
 Review comment:
   good, one dependency less - even better


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] SSE4 commented on issue #13400: [MXNET-1229] use OpenBLAS, lapack & OpenCV from conan

2019-05-06 Thread GitBox
SSE4 commented on issue #13400: [MXNET-1229] use OpenBLAS, lapack & OpenCV from 
conan
URL: https://github.com/apache/incubator-mxnet/pull/13400#issuecomment-489730514
 
 
   @szha yes, why not


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha commented on a change in pull request #13400: [MXNET-1229] use OpenBLAS, lapack & OpenCV from conan

2019-05-06 Thread GitBox
szha commented on a change in pull request #13400: [MXNET-1229] use OpenBLAS, 
lapack & OpenCV from conan
URL: https://github.com/apache/incubator-mxnet/pull/13400#discussion_r281302935
 
 

 ##
 File path: conanfile.py
 ##
 @@ -0,0 +1,28 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+from conans import ConanFile
+
+class IncubatorMXNetConan(ConanFile):
+settings = "os", "compiler", "build_type", "arch"
+requires = "openblas/0.2.20@conan/stable", "opencv/3.4.3@conan/stable", 
"lapack/3.7.1@conan/stable"
 
 Review comment:
   openblas already provides lapacke implementation.
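
   In other words, a hypothetical trimmed `requires` line (assuming OpenBLAS's bundled LAPACKE is sufficient, as noted above, so the separate lapack package can be dropped):
   
   ```
   # conanfile.py -- sketch only, not the final change in this PR
   requires = "openblas/0.2.20@conan/stable", "opencv/3.4.3@conan/stable"
   ```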


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] shadogray closed issue #14075: Generation of R reference manual fails

2019-05-06 Thread GitBox
shadogray closed issue #14075: Generation of R reference manual fails 
URL: https://github.com/apache/incubator-mxnet/issues/14075
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] canerturkmen commented on issue #14683: Adding loss operator of a Hawkes self-exciting process

2019-05-06 Thread GitBox
canerturkmen commented on issue #14683: Adding loss operator of a Hawkes 
self-exciting process
URL: https://github.com/apache/incubator-mxnet/pull/14683#issuecomment-489717370
 
 
   @eric-haibin-lin thanks for your comment Haibin! I think this version should 
do it (with more verbose indexing of the inputs and outputs).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] canerturkmen commented on issue #14683: Adding loss operator of a Hawkes self-exciting process

2019-05-06 Thread GitBox
canerturkmen commented on issue #14683: Adding loss operator of a Hawkes 
self-exciting process
URL: https://github.com/apache/incubator-mxnet/pull/14683#issuecomment-489717123
 
 
   > @canerturkmen Thanks for the contribution, could you take a look at the 
test failure?
   
   Thanks for the comment @roywei. I understand it was an unrelated random fail 
of another op's test.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] shadogray closed pull request #14078: [MXNET-14075] repair example, enable fault tolerant build of R reference manual

2019-05-06 Thread GitBox
shadogray closed pull request #14078: [MXNET-14075] repair example, enable 
fault tolerant build of R reference manual
URL: https://github.com/apache/incubator-mxnet/pull/14078
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] stu1130 commented on a change in pull request #14887: [Dependency Update] CUDA10.1 Support

2019-05-06 Thread GitBox
stu1130 commented on a change in pull request #14887: [Dependency Update] 
CUDA10.1 Support
URL: https://github.com/apache/incubator-mxnet/pull/14887#discussion_r281287031
 
 

 ##
 File path: tools/setup_gpu_build_tools.sh
 ##
 @@ -85,7 +91,31 @@ if [[ $VARIANT == cu* ]]; then
 fi
 
 # list of debs to download from nvidia
-if [[ $VARIANT == cu100* ]]; then
+if [[ $VARIANT == cu101* ]]; then
+cuda_files=( \
 
 Review comment:
   Thanks @szha I will rerun the benchmark with latest build script


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #14895: [DOC] Build from source link for ubuntu is broken

2019-05-06 Thread GitBox
mxnet-label-bot commented on issue #14895: [DOC] Build from source link for 
ubuntu is broken
URL: 
https://github.com/apache/incubator-mxnet/issues/14895#issuecomment-489710221
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Build


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest opened a new issue #14895: [DOC] Build from source link for ubuntu is broken

2019-05-06 Thread GitBox
apeforest opened a new issue #14895: [DOC] Build from source link for ubuntu is 
broken
URL: https://github.com/apache/incubator-mxnet/issues/14895
 
 
   On this page: 
https://github.com/apache/incubator-mxnet/blob/master/docs/install/build_from_source.md
   
   If click the ubuntu hyperlink, it is broken:
   
https://github.com/apache/incubator-mxnet/blob/master/docs/install/ubuntu_setup.html
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ehsanmok commented on issue #14875: MXNet to ONNX export bug

2019-05-06 Thread GitBox
ehsanmok commented on issue #14875: MXNet to ONNX export bug
URL: 
https://github.com/apache/incubator-mxnet/issues/14875#issuecomment-489708835
 
 
   @AnaRhisT94 No, my ONNX is already the latest, v1.5.0. The error happens when calling 
[export_model](https://mxnet.apache.org/api/python/contrib/onnx.html?highlight=onnx#module-mxnet.contrib.onnx.mx2onnx.export_model). `int(None)` is never valid.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14885: [Fit-API] Adress PR comments

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14885: [Fit-API] Adress PR comments
URL: https://github.com/apache/incubator-mxnet/pull/14885#issuecomment-489700399
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14879: Add cpu forward implementation for Deformable Convolution

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14879: Add cpu forward implementation for 
Deformable Convolution
URL: https://github.com/apache/incubator-mxnet/pull/14879#issuecomment-489700484
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14882: empty list cannot be cleared issue fixed.

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14882: empty list cannot be cleared issue 
fixed.
URL: https://github.com/apache/incubator-mxnet/pull/14882#issuecomment-489700461
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14877: Fix the incorrect MKLDNN/MKL logic in cmake

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14877: Fix the incorrect MKLDNN/MKL logic in 
cmake 
URL: https://github.com/apache/incubator-mxnet/pull/14877#issuecomment-489700509
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14874: [MXNET-1399] multiclass-mcc metric enhancements

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14874: [MXNET-1399] multiclass-mcc metric 
enhancements
URL: https://github.com/apache/incubator-mxnet/pull/14874#issuecomment-489700546
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14891: [Doc] Add MKL-DNN operator list

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14891: [Doc] Add MKL-DNN operator list
URL: https://github.com/apache/incubator-mxnet/pull/14891#issuecomment-489700031
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14884: [Dependency Update] Upgrade cuDNN & NCCL

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14884: [Dependency Update] Upgrade cuDNN & 
NCCL
URL: https://github.com/apache/incubator-mxnet/pull/14884#issuecomment-489700443
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14894: Accelerate ROIPooling layer

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14894: Accelerate ROIPooling layer
URL: https://github.com/apache/incubator-mxnet/pull/14894#issuecomment-489700582
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14886: Add cpu implementation for Deformable PSROIPooling

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14886: Add cpu implementation for Deformable 
PSROIPooling
URL: https://github.com/apache/incubator-mxnet/pull/14886#issuecomment-489700284
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14887: [Dependency Update] CUDA10.1 Support

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14887: [Dependency Update] CUDA10.1 Support
URL: https://github.com/apache/incubator-mxnet/pull/14887#issuecomment-489700266
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14889: fix add_n bug: when input mem overlap with output mem, results is wrong

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14889: fix add_n bug: when input mem overlap 
with output mem, results is wrong
URL: https://github.com/apache/incubator-mxnet/pull/14889#issuecomment-489700137
 
 
   @mxnet-label-bot add [pr-awaiting-merge]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudhacharya commented on issue #14893: Integrating the MKL VML functions to MXNET to speed-up the (element-wised) mathematic computation

2019-05-06 Thread GitBox
anirudhacharya commented on issue #14893: Integrating the MKL VML functions to 
MXNET to speed-up the (element-wised) mathematic computation
URL: https://github.com/apache/incubator-mxnet/pull/14893#issuecomment-489699981
 
 
   @mxnet-label-bot add [pr-awaiting-review]


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (42ede50 -> a722db4)

2019-05-06 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 42ede50  rewrite test_custom_op_exc (#14878)
 add a722db4  [Dependency Update] Upgrade openssl to 1.1.1b (#14837)

No new revisions were added by this update.

Summary of changes:
 tools/dependencies/openssl.sh  | 2 +-
 tools/staticbuild/build_lib.sh | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)



[GitHub] [incubator-mxnet] lanking520 merged pull request #14837: [Dependency Update] Upgrade openssl to 1.1.1b

2019-05-06 Thread GitBox
lanking520 merged pull request #14837: [Dependency Update] Upgrade openssl to 
1.1.1b
URL: https://github.com/apache/incubator-mxnet/pull/14837
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] arcadiaphy opened a new pull request #14894: Accelerate ROIPooling layer

2019-05-06 Thread GitBox
arcadiaphy opened a new pull request #14894: Accelerate ROIPooling layer
URL: https://github.com/apache/incubator-mxnet/pull/14894
 
 
   ## Description ##
   As title. Two major changes in implementation:
   
   1. Use int to store max_idx, avoiding possible value changes when casting int to float (see the short sketch right after this list).
   2. Store the global index of the input data blob in max_idx to simplify gradient accumulation in the backward pass.
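
   A quick illustration of point 1 (a generic numpy check, not MXNet code): sufficiently large indices are not exactly representable in float32, so keeping `max_idx` in a float buffer can silently change the stored index.
   
   ```
   import numpy as np
   
   idx = 2 ** 24 + 1              # 16777217, a plausible flat index into a large feature map
   print(np.float32(idx) == idx)  # False: float32 rounds it to 16777216.0
   print(int(np.float32(idx)))    # 16777216 -- the recovered index is off by one
   ```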
   
   ### Benchmarking script ###
   ```
   import mxnet as mx
   import numpy as np
   np.random.seed(0)
   import time
   
   batch_size = 4
   channel = 16
   height = 256
   width = 256
   n_rois = 500
   pooled_size = (5, 5)
   
   def test(ctx, tries=5):
       s = 0.
       for idx in xrange(tries + 1):
           x = np.random.random((batch_size, channel, height, width))
           y = np.zeros((n_rois, 5))
           y[:, 0] += np.floor(np.random.random((n_rois)) * batch_size)
           y[:, 1] += np.random.random((n_rois)) * width
           y[:, 2] += np.random.random((n_rois)) * height
           y[:, 3] += np.minimum(y[:, 1] + np.random.random((n_rois)) * width, width)
           y[:, 4] += np.minimum(y[:, 2] + np.random.random((n_rois)) * height, height)
   
           with ctx:
               x = mx.nd.array(x)
               y = mx.nd.array(y)
               mx.nd.waitall()
               start = time.time()
               with mx.autograd.record():
                   x.attach_grad()
                   r = mx.nd.ROIPooling(data=x, rois=y,
                                        spatial_scale=1,
                                        pooled_size=pooled_size)
               r.backward(mx.nd.ones_like(r))
               mx.nd.waitall()
               if idx > 0:
                   s += time.time() - start
       print 'time: {} s on {}'.format(s / tries, ctx)
   
   if __name__ == '__main__':
       test(mx.cpu())
       test(mx.gpu())
   ```
   
   ### Result ###
   Before:
   ```
   time: 13.2959038258 s on cpu(0)
   time: 0.0346892356873 s on gpu(0)
   ```
   
   After:
   ```
   time: 0.6734582901 s on cpu(0)
   time: 0.0038733959198 s on gpu(0)
   ```
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

