[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-10-03 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 5f2efa9  Bump the publish timestamp.
5f2efa9 is described below

commit 5f2efa95f330e5286f039bc88b56030701d8c8af
Author: mxnet-ci 
AuthorDate: Fri Oct 4 06:45:12 2019 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..a350099
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Oct  4 06:45:12 UTC 2019



[incubator-mxnet] branch master updated: boolean_mask_assign operator for future boolean indexing (#16361)

2019-10-03 Thread reminisce
This is an automated email from the ASF dual-hosted git repository.

reminisce pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 916fbf2  boolean_mask_assign operator for future boolean indexing 
(#16361)
916fbf2 is described below

commit 916fbf23f82b422bc22c5813d02f03d4657de220
Author: Hao Jin 
AuthorDate: Thu Oct 3 22:47:22 2019 -0700

boolean_mask_assign operator for future boolean indexing (#16361)
---
 src/operator/numpy/np_boolean_mask_assign.cc | 270 +++
 src/operator/numpy/np_boolean_mask_assign.cu | 229 +++
 src/operator/numpy/np_broadcast_reduce_op.h  |   9 +
 tests/python/unittest/test_numpy_op.py   |  36 
 4 files changed, 544 insertions(+)

diff --git a/src/operator/numpy/np_boolean_mask_assign.cc 
b/src/operator/numpy/np_boolean_mask_assign.cc
new file mode 100644
index 000..2a5ae11
--- /dev/null
+++ b/src/operator/numpy/np_boolean_mask_assign.cc
@@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file np_boolean_assign.cc
+ * \brief CPU implementation of Boolean Mask Assign
+ */
+
+#include "../contrib/boolean_mask-inl.h"
+
+namespace mxnet {
+namespace op {
+
+template<bool scalar>
+struct BooleanAssignCPUKernel {
+ private:
+  static size_t bin_search(const size_t* idx,
+   const size_t idx_size,
+   const size_t i) {
+size_t left = 0, right = idx_size, mid = (left + right) / 2;
+while (left != right) {
+  if (idx[mid] == i + 1) {
+if (idx[mid - 1] == i) {
+  mid -= 1;
+  break;
+} else if (idx[mid - 1] == i + 1) {
+  right = mid;
+  mid = (left + right) / 2;
+}
+  } else if (idx[mid] == i) {
+if (idx[mid + 1] == i + 1) {
+  break;
+} else {
+  left = mid;
+  mid = (left + right + 1) / 2;
+}
+  } else if (idx[mid] < i + 1) {
+left = mid;
+mid = (left + right + 1) / 2;
+  } else if (idx[mid] > i + 1) {
+right = mid;
+mid = (left + right) / 2;
+  }
+}
+return mid;
+  }
+
+ public:
+  template<typename DType>
+  static void Map(int i,
+  DType* data,
+  const size_t* idx,
+  const size_t idx_size,
+  const size_t leading,
+  const size_t middle,
+  const size_t trailing,
+  const DType val) {
+// binary search for the turning point
+size_t mid = bin_search(idx, idx_size, i);
+// final answer is in mid
+for (size_t l = 0; l < leading; ++l) {
+  for (size_t t = 0; t < trailing; ++t) {
+data[(l * middle + mid) * trailing + t] = val;
+  }
+}
+  }
+
+  template<typename DType>
+  static void Map(int i,
+  DType* data,
+  const size_t* idx,
+  const size_t idx_size,
+  const size_t leading,
+  const size_t middle,
+  const size_t trailing,
+  DType* tensor) {
+// binary search for the turning point
+size_t mid = bin_search(idx, idx_size, i);
+// final answer is in mid
+for (size_t l = 0; l < leading; ++l) {
+  for (size_t t = 0; t < trailing; ++t) {
+data[(l * middle + mid) * trailing + t] = (scalar) ? tensor[0] : tensor[i];
+  }
+}
+  }
+};
+
+bool BooleanAssignShape(const nnvm::NodeAttrs& attrs,
+mxnet::ShapeVector *in_attrs,
+mxnet::ShapeVector *out_attrs) {
+  CHECK(in_attrs->size() == 2U || in_attrs->size() == 3U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  const TShape& dshape = in_attrs->at(0);
+
+  // mask should have the same shape as the input
+  SHAPE_ASSIGN_CHECK(*in_attrs, 1, dshape);
+
+  // check if output shape is the same as the input data
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, dshape);
+
+  // for tensor version, the tensor should have less than 1 dimension
+  if (in_attrs->size() == 3U) {
+CHECK_LE(in_attrs->at(2).ndim(), 1U)
+  << "boolean array indexing assignment requires a 0 or 1-dimensional
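The (truncated) diff above adds a NumPy-style boolean mask assignment operator. As a reference for the intended semantics, here is a plain NumPy sketch (illustrative only; `boolean_mask_assign` itself is the C++ backend operator, not this Python code):

```python
import numpy as np

# Reference semantics for boolean mask assignment: rows selected by a
# boolean mask are overwritten either with a scalar or with rows taken
# from another tensor, matching a[mask] = value in NumPy.
data = np.arange(12, dtype=np.float32).reshape(4, 3)
mask = np.array([True, False, True, False])

# Scalar branch: every masked row receives the same value.
scalar_out = data.copy()
scalar_out[mask] = -1.0

# Tensor branch: masked rows are filled from `values` in order.
values = np.full((2, 3), 9.0, dtype=np.float32)
tensor_out = data.copy()
tensor_out[mask] = values

print(scalar_out[0])  # [-1. -1. -1.]
print(tensor_out[2])  # [9. 9. 9.]
```

The kernel's binary search over the mask's prefix-sum index serves exactly this mapping from the i-th True entry back to its row position.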

[GitHub] [incubator-mxnet] reminisce merged pull request #16361: npi.boolean_mask_assign_(scalar, tensor) operator for future boolean indexing

2019-10-03 Thread GitBox
reminisce merged pull request #16361: npi.boolean_mask_assign_(scalar, tensor) 
operator for future boolean indexing
URL: https://github.com/apache/incubator-mxnet/pull/16361
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #15921: dynamic custom operator support

2019-10-03 Thread GitBox
wkcn commented on a change in pull request #15921: dynamic custom operator 
support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r331326421
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -18,33 +18,756 @@
  */
 
 /*!
- * Copyright (c) 2015 by Contributors
+ * Copyright (c) 2019 by Contributors
  * \file lib_api.h
  * \brief APIs to interact with libraries
+ * This API specifies function prototypes to
+ * register custom ops for library authors
  */
+
 #ifndef MXNET_LIB_API_H_
 #define MXNET_LIB_API_H_
 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define MX_LIBRARY_VERSION 1
+
+/*!
+ * \brief External Tensor data types
+ */
+enum MXDType {
+  kFloat32 = 0,
+  kFloat64 = 1,
+  kFloat16 = 2,
+  kUint8 = 3,
+  kInt32 = 4,
+  kInt8  = 5,
+  kInt64 = 6,
+};
+
+enum MXReturnValue {
+  MX_FAIL = 0,
+  MX_SUCCESS = 1,
+};
+
+/*!
+ * \brief External Tensor data structure
+ */
+struct MXTensor {
+  MXTensor() : data(NULL) {}
+
+  MXTensor(void *data, const std::vector<int64_t> &shape, MXDType dtype)
+  : data(data), shape(shape), dtype(dtype) {}
+
+  /*! \brief helper function to cast data pointer */
+  template<typename data_type>
+  inline data_type* getData() {
 
 Review comment:
   Agree : )




[GitHub] [incubator-mxnet] igolan opened a new pull request #16373: Round and sign straight-through-estimators C operators.

2019-10-03 Thread GitBox
igolan opened a new pull request #16373: Round and sign 
straight-through-estimators C operators.
URL: https://github.com/apache/incubator-mxnet/pull/16373
 
 
   ## Description ##
   Implemented sign and round straight-through-estimator operators in C.
   Straight-through estimators have a derivative of 1 everywhere instead of 0 
everywhere; this is required for quantized training.
   
   ## Checklist ##
   ### Essentials ###
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are not affected by this change.
   
   ### Changes ###
   - [x] contrib.round_ste() including test and API doc
   - [x] contrib.sign_ste() including test and API doc
   
   ## Comments ##
   N/A
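A minimal NumPy sketch of the straight-through-estimator idea described above (illustrative only; the PR implements these as backend operators `contrib.round_ste` and `contrib.sign_ste`):

```python
import numpy as np

# Straight-through estimator: quantize in the forward pass, but treat
# the derivative as 1 in the backward pass so gradients flow through.
def round_ste_forward(x):
    return np.round(x)

def round_ste_backward(grad_out):
    # identity gradient instead of the true derivative (0 almost everywhere)
    return grad_out

x = np.array([-1.6, -0.4, 0.4, 1.6])
y = round_ste_forward(x)
g = round_ste_backward(np.ones_like(x))
print(y)  # [-2. -0.  0.  2.]
print(g)  # [1. 1. 1. 1.]
```

Without the straight-through trick, the gradient of round/sign would be zero, and no learning signal would reach the quantized weights.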




[incubator-mxnet] branch master updated (09ae7df -> 626fc32)

2019-10-03 Thread reminisce
This is an automated email from the ASF dual-hosted git repository.

reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 09ae7df  remove redundant branch name (#16372)
 add 626fc32  Disable Pylint false error in numpy_op_signature  (#16370)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/numpy_op_signature.py | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)



[GitHub] [incubator-mxnet] reminisce merged pull request #16370: Disable Pylint false error in numpy_op_signature

2019-10-03 Thread GitBox
reminisce merged pull request #16370: Disable Pylint false error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370
 
 
   




[GitHub] [incubator-mxnet] iblis17 commented on issue #16363: Julia: add API docs back

2019-10-03 Thread GitBox
iblis17 commented on issue #16363: Julia: add API docs back
URL: https://github.com/apache/incubator-mxnet/pull/16363#issuecomment-538198075
 
 
   I guess it can be written as a Julia expression that outputs a list.
   Just `ls` that dir.




[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16257: [Numpy] add numpy op bitwise_xor, hsplit, moveaxis, rot90

2019-10-03 Thread GitBox
sxjscience commented on a change in pull request #16257: [Numpy]  add numpy op 
bitwise_xor, hsplit, moveaxis, rot90
URL: https://github.com/apache/incubator-mxnet/pull/16257#discussion_r331311180
 
 

 ##
 File path: src/operator/numpy/np_matrix_op.cc
 ##
 @@ -612,5 +614,215 @@ NNVM_REGISTER_OP(_backward_npi_flip)
 })
 .set_attr<FCompute>("FCompute", NumpyFlipForward<cpu>);
 
+bool NumpyMoveaxisShape(const nnvm::NodeAttrs& attrs,
+mxnet::ShapeVector *in_attrs,
+mxnet::ShapeVector *out_attrs) {
+  const NumpyMoveaxisParam& param = nnvm::get<NumpyMoveaxisParam>(attrs.parsed);
+  CHECK_EQ(in_attrs->size(), 1U);
+  CHECK_EQ(out_attrs->size(), 1U);
+  mxnet::TShape& shp = (*in_attrs)[0];
+  CHECK_LE(shp.ndim(), 6) << "Transpose support at most 6 dimensions";
+  CHECK_EQ(param.source.ndim(), param.destination.ndim())
+<< "source and destination not equal.";
+  mxnet::TShape ret(shp.ndim(), -1);
+  mxnet::TShape axes(shp.ndim(), -1);
+  std::vector<bool> state_axes(shp.ndim(), false);
+  mxnet::TShape real_src(param.source.ndim(), -1);
+  mxnet::TShape real_des(param.destination.ndim(), -1);
+  for (int i = 0; i < param.source.ndim(); ++i) {
+if (param.source[i] >= 0) {
+  CHECK_LT(static_cast<int>(param.source[i]), shp.ndim());
+  real_src[i] = param.source[i];
+} else {
+  CHECK_LT(param.source[i] + shp.ndim(), shp.ndim());
+  real_src[i] = param.source[i] + shp.ndim();
+}
+if (param.destination[i] >= 0) {
+  CHECK_LT(static_cast<int>(param.destination[i]), shp.ndim());
+  real_des[i] = param.destination[i];
+} else {
+  CHECK_LT(param.destination[i] + shp.ndim(), shp.ndim());
+  real_des[i] = param.destination[i] + shp.ndim();
+}
+  }
+  if (shp.ndim() > 1) {
+for (int i = 0; i < param.source.ndim() - 1; ++i) {
+  for (int j = i + 1; j < param.source.ndim(); ++j) {
+CHECK_NE(real_src[i], real_src[j])
+  << "repeated axis in `source` argument";
+CHECK_NE(real_des[i], real_des[j])
+  << "repeated axis in `destination` argument";
+  }
+}
+  }
+  for (int i = 0; i < param.source.ndim(); ++i) {
+axes[real_des[i]] = real_src[i];
+state_axes[real_src[i]] = true;
+  }
+  for (int i = 0; i < axes.ndim(); ++i) {
+if (axes[i] < 0) {
+  for (int j = 0; j < axes.ndim(); ++j) {
+if (state_axes[j] == false) {
+  axes[i] = j;
+  state_axes[j] = true;
+  break;
+}
+  }
+}
+  }
 
 Review comment:
   @gyshi Would you take a look at the issue?
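For reference, the shape logic quoted above mirrors `np.moveaxis`: axes listed in `source` move to the positions in `destination` (negative axes normalized, duplicates rejected), and the remaining axes keep their relative order. Its behavior:

```python
import numpy as np

# np.moveaxis: axes in `source` move to the positions in `destination`;
# all other axes keep their original relative order.
a = np.zeros((3, 4, 5))

# Single axis, negative destination: axis 0 moves to the last position.
print(np.moveaxis(a, 0, -1).shape)           # (4, 5, 3)

# Multiple axes: axis 0 -> position 2, axis 1 -> position 1.
print(np.moveaxis(a, [0, 1], [2, 1]).shape)  # (5, 4, 3)
```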




[GitHub] [incubator-mxnet] marcoabreu commented on issue #16370: Disable Pylint false error in numpy_op_signature

2019-10-03 Thread GitBox
marcoabreu commented on issue #16370: Disable Pylint false error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538187050
 
 
   Okay, sounds good, thanks.
   
   But like I thought, it's the python 3 lint that complained.
   
   I'll leave it to your discretion, but you might want to consider adding a 
conditional import which will only do it for Python 2.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-10-03 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new f0f7b4c  Bump the publish timestamp.
f0f7b4c is described below

commit f0f7b4c7a0fe8d082344a0b73ecd46810f5d18af
Author: mxnet-ci 
AuthorDate: Fri Oct 4 00:39:44 2019 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..e2703ef
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Oct  4 00:39:44 UTC 2019



[GitHub] [incubator-mxnet] sxjscience commented on issue #12048: reshape_like does not support different input types

2019-10-03 Thread GitBox
sxjscience commented on issue #12048: reshape_like does not support different 
input types
URL: 
https://github.com/apache/incubator-mxnet/issues/12048#issuecomment-538181342
 
 
   To revise it, we need to change the FInferType 
https://github.com/apache/incubator-mxnet/blob/09ae7dfe9cb559dd6fa4996d491ee125e3e0b9e7/src/operator/tensor/elemwise_unary_op_basic.cc#L532-L536
 




[GitHub] [incubator-mxnet] sxjscience edited a comment on issue #12048: reshape_like does not support different input types

2019-10-03 Thread GitBox
sxjscience edited a comment on issue #12048: reshape_like does not support 
different input types
URL: 
https://github.com/apache/incubator-mxnet/issues/12048#issuecomment-538181342
 
 
   To support it, we need to change the FInferType 
https://github.com/apache/incubator-mxnet/blob/09ae7dfe9cb559dd6fa4996d491ee125e3e0b9e7/src/operator/tensor/elemwise_unary_op_basic.cc#L532-L536
 




[GitHub] [incubator-mxnet] QueensGambit edited a comment on issue #16173: Saving and loading cudNN autotune and graph optimization

2019-10-03 Thread GitBox
QueensGambit edited a comment on issue #16173: Saving and loading cudNN 
autotune and graph optimization
URL: 
https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-537932809
 
 
   Thank you for the feedback @KellenSunderland.
   I think the mini-batch size should be included as a caching specification as 
well, because optimization techniques like TensorRT depend on it.
   
   A different approach would be to define a save and load function for the 
[Executor 
class](https://github.com/apache/incubator-mxnet/blob/992c3c0dd90c0723de6934e826a49bad6569eeac/include/mxnet/executor.h#L53).
   The memory file of an executor handle would contain all additional platform 
specific definitions and optimization results. This would allow the user to run 
the full binding process once on a specific platform and later the option to 
bind it much quicker:
   
   ```python
   # mxnet/executor.py
   def save(filename_exec):
   """Saves the executor handle including specific optimization of the graph.
    Must be run after the executor handle was bound: `model.bind()`.
   
   Parameters
   --
    filename_exec : str
    Path to the executor file (e.g. "executor.exec").
   References
   --
   `Saving and Loading of Executor handles \
   `_
   """
   ```
   
   In order to preferably avoid an additional copy of the model parameters, one 
needs to specify the `.params` and `.symbol` filepath when loading the executor 
handle. This would also enable to update the model parameters independently 
from the optimization cache:
   
   ```python
   # mxnet/executor.py
   def load(filename_exec, filename_symbol, filename_params):
   """Loads and binds the executor handle.
   
   Parameters
   --
   filename_exec : str
   Path to the executor file (e.g. "executor.exec").
   filename_symbol : str
   Path to the model architecture definition (e.g. "model.symbol").
   filename_params : str
   Path to the model weights (e.g. "model.params").
   References
   --
   `Saving and Loading of Executor handles \
   `_
   """
   ```




[GitHub] [incubator-mxnet] reminisce commented on issue #16370: Disable Pylint false error in numpy_op_signature

2019-10-03 Thread GitBox
reminisce commented on issue #16370: Disable Pylint false error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538176570
 
 
   Another proof of why `absolute_import` is required in Python 2: if it were 
deleted and you typed `import numpy` in that file expecting to import the 
official numpy package, you would just get `mxnet.numpy`, because Python 2 
does not look it up among the top-level packages first.
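A small self-contained demonstration of this shadowing concern (a sketch using a stdlib module name instead of numpy/mxnet; under Python 3's absolute imports, which `from __future__ import absolute_import` opts into on Python 2, the top-level module always wins):

```python
import os
import sys
import tempfile

# Build a throwaway package containing a local module named "json",
# mirroring how mxnet.numpy shadows the top-level numpy name.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "mypkg"))
open(os.path.join(root, "mypkg", "__init__.py"), "w").close()
with open(os.path.join(root, "mypkg", "json.py"), "w") as f:
    f.write("flavor = 'local shadow'\n")  # lacks json.dumps
with open(os.path.join(root, "mypkg", "user.py"), "w") as f:
    # A bare `import json` inside the package:
    f.write("import json\nhas_dumps = hasattr(json, 'dumps')\n")

sys.path.insert(0, root)
from mypkg import user

# Absolute imports resolve `import json` to the top-level stdlib module,
# not the sibling mypkg/json.py; under Python 2 implicit relative
# imports, the shadow module would win instead.
print(user.has_dumps)  # True
```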




[GitHub] [incubator-mxnet] stu1130 edited a comment on issue #16370: Disable Pylint false error in numpy_op_signature

2019-10-03 Thread GitBox
stu1130 edited a comment on issue #16370: Disable Pylint false error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538175688
 
 
   My 
[PR](http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fsanity/detail/PR-16335/8/pipeline)
 failed on this pylint check. Let me rebase it and trigger CI again
   




[GitHub] [incubator-mxnet] stu1130 commented on issue #16370: Disable Pylint false error in numpy_op_signature

2019-10-03 Thread GitBox
stu1130 commented on issue #16370: Disable Pylint false error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538175688
 
 
   My 
[PR](http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fsanity/detail/PR-16335/8/pipeline)
 failed on this pylint check
   




[GitHub] [incubator-mxnet] larroy edited a comment on issue #13484: flaky test test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker

2019-10-03 Thread GitBox
larroy edited a comment on issue #13484: flaky test 
test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker
URL: 
https://github.com/apache/incubator-mxnet/issues/13484#issuecomment-538175213
 
 
   Any suggestions? is it reproducible?




[GitHub] [incubator-mxnet] larroy commented on issue #13484: flaky test test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker

2019-10-03 Thread GitBox
larroy commented on issue #13484: flaky test 
test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker
URL: 
https://github.com/apache/incubator-mxnet/issues/13484#issuecomment-538175213
 
 
   Any suggestions?




[GitHub] [incubator-mxnet] marcoabreu commented on issue #16370: Disable Pylint false error in numpy_op_signature

2019-10-03 Thread GitBox
marcoabreu commented on issue #16370: Disable Pylint false error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538174374
 
 
   Could you link a CI run where the pylint failed by any chance? So far, I 
haven't found an issue where this problem is documented.
   
   I'm thinking that it might depend on the underlying Python version being 
used. Maybe the Python2 pylint says that its fine while the Python3 pylint 
complains because that's the default behaviour already. If that assumption is 
true, a conditional import for python 2 only (which we would then deprecate 
soon) would resolve that.
   
   (I understand that it might seem like some hassle for a warning, just trying 
to understand the background here)




[GitHub] [incubator-mxnet] reminisce commented on issue #16370: Disable Pylint false error in numpy_op_signature

2019-10-03 Thread GitBox
reminisce commented on issue #16370: Disable Pylint false error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538172017
 
 
   @marcoabreu Could you explain why it didn't error out for many PRs after 
`numpy_op_signature.py` was introduced? The pylint behavior is flaky at least.
   
Adding `absolute_import` explicitly enforces consistent importing behavior 
between python2 and python3 throughout the current file without relying on the 
assumption that this is done somewhere else. So I don't feel comfortable about 
deleting it to just make pylint happy.




[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #15921: dynamic custom operator support

2019-10-03 Thread GitBox
rondogency commented on a change in pull request #15921: dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r331297763
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -18,33 +18,756 @@
  */
 
 /*!
- * Copyright (c) 2015 by Contributors
+ * Copyright (c) 2019 by Contributors
  * \file lib_api.h
  * \brief APIs to interact with libraries
+ * This API specifies function prototypes to
+ * register custom ops for library authors
  */
+
 #ifndef MXNET_LIB_API_H_
 #define MXNET_LIB_API_H_
 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define MX_LIBRARY_VERSION 1
+
+/*!
+ * \brief External Tensor data types
+ */
+enum MXDType {
+  kFloat32 = 0,
+  kFloat64 = 1,
+  kFloat16 = 2,
+  kUint8 = 3,
+  kInt32 = 4,
+  kInt8  = 5,
+  kInt64 = 6,
+};
+
+enum MXReturnValue {
+  MX_FAIL = 0,
+  MX_SUCCESS = 1,
+};
+
+/*!
+ * \brief External Tensor data structure
+ */
+struct MXTensor {
+  MXTensor() : data(NULL) {}
+
+  MXTensor(void *data, const std::vector<int64_t> &shape, MXDType dtype)
+  : data(data), shape(shape), dtype(dtype) {}
+
+  /*! \brief helper function to cast data pointer */
+  template<typename data_type>
+  inline data_type* getData() {
 
 Review comment:
   I think it should be the user's job to figure out which data type to cast to, 
given the tensor's dtype info. The mshadow macro or a similar mechanism might be 
too complex for a custom op, especially since this can be done with a few lines 
of hard-coded types. What do you think?




[incubator-mxnet] branch master updated (b6f3235 -> 09ae7df)

2019-10-03 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b6f3235  Fix nightly scala pipeline (#16362)
 add 09ae7df  remove redundant branch name (#16372)

No new revisions were added by this update.

Summary of changes:
 ci/Jenkinsfile_utils.groovy | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[GitHub] [incubator-mxnet] marcoabreu merged pull request #16372: Remove redundant branch name

2019-10-03 Thread GitBox
marcoabreu merged pull request #16372: Remove redundant branch name
URL: https://github.com/apache/incubator-mxnet/pull/16372
 
 
   




[GitHub] [incubator-mxnet] marcoabreu commented on issue #16370: Disable Pylint false error in numpy_op_signature

2019-10-03 Thread GitBox
marcoabreu commented on issue #16370: Disable Pylint false error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538165716
 
 
   As far as I understand, the absolute_import behaviour could already be the 
default in newer python versions according to 
https://www.python.org/dev/peps/pep-0328/ 
   
   Could that mean that pylint reports it as duplicate because it's already 
done on a system level? What happens if you remove it?




[GitHub] [incubator-mxnet] stu1130 edited a comment on issue #16370: Disable Pylint error in numpy_op_signature

2019-10-03 Thread GitBox
stu1130 edited a comment on issue #16370: Disable Pylint error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538160965
 
 
   @marcoabreu yes based on the following error message
   ```
   python/mxnet/numpy_op_signature.py:20:0: W0404: Reimport 'absolute_import' 
(imported line 20) (reimported)
   ```
   reimport and import are reported at the same line




[GitHub] [incubator-mxnet] stu1130 commented on issue #16370: Disable Pylint error in numpy_op_signature

2019-10-03 Thread GitBox
stu1130 commented on issue #16370: Disable Pylint error in numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538160965
 
 
   @marcoabreu yes based on the following error message
   ```
   python/mxnet/numpy_op_signature.py:20:0: W0404: Reimport 'absolute_import' 
(imported line 20) (reimported)
   ```
   reimport and import are reported at the same line




[GitHub] [incubator-mxnet] marcoabreu commented on issue #13484: flaky test test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker

2019-10-03 Thread GitBox
marcoabreu commented on issue #13484: flaky test 
test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker
URL: 
https://github.com/apache/incubator-mxnet/issues/13484#issuecomment-538157046
 
 
   120 seconds is quite some time. Considering everything is happening on a local 
volume, it's quite unlikely that the disk is that occupied. Could something be 
stuck? I think it's worth investigating.




[GitHub] [incubator-mxnet] zheng-da opened a new issue #13484: flaky test test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker

2019-10-03 Thread GitBox
zheng-da opened a new issue #13484: flaky test 
test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker
URL: https://github.com/apache/incubator-mxnet/issues/13484
 
 
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-13418/11/pipeline
   
   ```
   ======================================================================
   ERROR: test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker
   ----------------------------------------------------------------------
   Traceback (most recent call last):
     File "C:\Anaconda3\envs\py2\lib\site-packages\nose\case.py", line 197, in runTest
       self.test(*self.arg)
     File "C:\jenkins_slave\workspace\ut-python-cpu@3\tests\python\unittest\common.py", line 173, in test_new
       orig_test(*args, **kwargs)
     File "C:\jenkins_slave\workspace\ut-python-cpu@3\tests\python\unittest\test_gluon_data.py", line 86, in test_recordimage_dataset_with_data_loader_multiworker
       for i, (x, y) in enumerate(loader):
     File "C:\jenkins_slave\workspace\ut-python-cpu@3\windows_package\python\mxnet\gluon\data\dataloader.py", line 279, in next
       return self.__next__()
     File "C:\jenkins_slave\workspace\ut-python-cpu@3\windows_package\python\mxnet\gluon\data\dataloader.py", line 267, in __next__
       self.shutdown()
     File "C:\jenkins_slave\workspace\ut-python-cpu@3\windows_package\python\mxnet\gluon\data\dataloader.py", line 298, in shutdown
       w.terminate()
     File "C:\Anaconda3\envs\py2\lib\multiprocessing\process.py", line 137, in terminate
       self._popen.terminate()
     File "C:\Anaconda3\envs\py2\lib\multiprocessing\forking.py", line 312, in terminate
       _subprocess.TerminateProcess(int(self._handle), TERMINATE)
   WindowsError: [Error 5] Access is denied
   ```
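
   A hypothetical mitigation sketch (not the actual MXNet fix): tolerate termination errors for workers that have already exited, which is one way the Windows "Access is denied" error during DataLoader shutdown could be avoided. `safe_terminate` is an illustrative name, not an existing MXNet helper.

   ```python
   # Hypothetical sketch: terminate a worker process while tolerating a
   # race where the worker has already exited on its own. On Windows,
   # WindowsError is a subclass of OSError, so this catch covers the
   # "Access is denied" case seen in the traceback above.
   import multiprocessing as mp

   def safe_terminate(worker):
       """Terminate a worker process, tolerating races with its own exit."""
       try:
           worker.terminate()
       except OSError:
           pass  # process already gone; nothing to clean up
       worker.join(timeout=5)

   if __name__ == "__main__":
       p = mp.Process(target=sum, args=([1, 2, 3],))
       p.start()
       safe_terminate(p)
       print(p.is_alive())  # False once the worker has been reaped
   ```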




[GitHub] [incubator-mxnet] marcoabreu commented on issue #16370: Disable Pylint error in numpy_op_signature

2019-10-03 Thread GitBox
marcoabreu commented on issue #16370: Disable Pylint error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538156474
 
 
   Sorry I don't understand. Do you mean this is a bug in pylint?




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16266: Numpy-compatible histogram

2019-10-03 Thread GitBox
haojin2 commented on a change in pull request #16266: Numpy-compatible histogram
URL: https://github.com/apache/incubator-mxnet/pull/16266#discussion_r331260601
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -612,6 +612,53 @@ def tensordot(a, b, axes=2):
     return _npi.tensordot(a, b, a_axes_summed, b_axes_summed)


+@set_module('mxnet.ndarray.numpy')
+def histogram(a, bins=10, range=None, normed=None, weights=None, density=None):  # pylint: disable=too-many-arguments
+    """
+    Compute the histogram of a set of data.
+
+    Parameters
+    ----------
+    a : NDArray
 
 Review comment:
   Done.
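
   For reference, this is how NumPy's own `histogram` behaves with the signature (`a`, `bins`, `range`, `weights`, `density`) that the new MXNet operator mirrors; the sample data here is illustrative only:

   ```python
   # NumPy histogram: 3 equal-width bins over [1, 4); edges are bin
   # boundaries, counts are the number of samples per bin.
   import numpy as np

   counts, edges = np.histogram([1, 2, 1, 3], bins=3, range=(1, 4))
   print(counts)  # [2 1 1]
   print(edges)   # [1. 2. 3. 4.]
   ```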




[GitHub] [incubator-mxnet] Laurawly commented on a change in pull request #16151: [numpy] [tvm] operator floor_divide

2019-10-03 Thread GitBox
Laurawly commented on a change in pull request #16151: [numpy] [tvm] operator 
floor_divide
URL: https://github.com/apache/incubator-mxnet/pull/16151#discussion_r330814829
 
 

 ##
 File path: contrib/tvmop/core/umath.py
 ##
 @@ -0,0 +1,113 @@
+ # Licensed to the Apache Software Foundation (ASF) under one
+ # or more contributor license agreements.  See the NOTICE file
+ # distributed with this work for additional information
+ # regarding copyright ownership.  The ASF licenses this file
+ # to you under the Apache License, Version 2.0 (the
+ # "License"); you may not use this file except in compliance
+ # with the License.  You may obtain a copy of the License at
+ #
+ #   http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing,
+ # software distributed under the License is distributed on an
+ # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ # KIND, either express or implied.  See the License for the
+ # specific language governing permissions and limitations
+ # under the License.
+import tvm
+from .. import defop, AllTypes
+
+def compute_floor_divide(dtype, ndim):
+    A = tvm.placeholder([tvm.var() for _ in range(ndim)], name='A', dtype=dtype)
+    B = tvm.placeholder([tvm.var() for _ in range(ndim)], name='B', dtype=dtype)
+    if dtype in ['float16', 'float32', 'float64']:
+        C = tvm.compute([tvm.var() for _ in range(ndim)],
+                        lambda *index: tvm.floor(A[index] / B[index]), name='C')
+    else:
+        C = tvm.compute([tvm.var() for _ in range(ndim)],
+                        lambda *index: tvm.if_then_else(B[index] == 0, tvm.const(0, dtype),
+                                                        tvm.floor(A[index].astype('float64') /
+                                                                  B[index].astype('float64')).astype(dtype)),
+                        name='C')
 
 Review comment:
   Why are we upcasting `tvm.floor`'s input `dtype` to `float64` here?
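
   One plausible motivation for the `float64` choice (my reading, not stated in the thread): `tvm.floor` needs a floating-point operand, and `float32` cannot represent every integer exactly, so a narrower intermediate could corrupt large integer quotients:

   ```python
   # float32 has a 24-bit significand, so 2**24 + 1 is the smallest
   # positive integer it cannot represent; float64 is exact up to 2**53.
   import numpy as np

   v = 2**24 + 1
   print(float(np.float32(v)))  # 16777216.0 -- rounded, precision lost
   print(float(np.float64(v)))  # 16777217.0 -- preserved
   ```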




[GitHub] [incubator-mxnet] Laurawly commented on a change in pull request #16151: [numpy] [tvm] operator floor_divide

2019-10-03 Thread GitBox
Laurawly commented on a change in pull request #16151: [numpy] [tvm] operator 
floor_divide
URL: https://github.com/apache/incubator-mxnet/pull/16151#discussion_r330812275
 
 

 ##
 File path: contrib/tvmop/core/umath.py
 ##
 @@ -0,0 +1,113 @@
+ # Licensed to the Apache Software Foundation (ASF) under one
+ # or more contributor license agreements.  See the NOTICE file
+ # distributed with this work for additional information
+ # regarding copyright ownership.  The ASF licenses this file
+ # to you under the Apache License, Version 2.0 (the
+ # "License"); you may not use this file except in compliance
+ # with the License.  You may obtain a copy of the License at
+ #
+ #   http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing,
+ # software distributed under the License is distributed on an
+ # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ # KIND, either express or implied.  See the License for the
+ # specific language governing permissions and limitations
+ # under the License.
+import tvm
+from .. import defop, AllTypes
+
+def compute_floor_divide(dtype, ndim):
+    A = tvm.placeholder([tvm.var() for _ in range(ndim)], name='A', dtype=dtype)
+    B = tvm.placeholder([tvm.var() for _ in range(ndim)], name='B', dtype=dtype)
 Review comment:
   Do we need to consider the case where `A` and `B` have different `dtype`s here? `numpy.floor_divide` accepts inputs of two different `dtype`s with no error, but `tvm.floor` will raise a type-mismatch error.
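
   A plain-NumPy illustration of the reviewer's point (not the TVM kernel): `numpy.floor_divide` promotes mixed input dtypes instead of erroring.

   ```python
   # Mixed int32 / float64 inputs: NumPy promotes to float64 and computes
   # the element-wise floor of the quotient.
   import numpy as np

   a = np.array([7, 8, 9], dtype=np.int32)
   b = np.array([2.0, 3.0, 4.0], dtype=np.float64)
   out = np.floor_divide(a, b)
   print(out.dtype)  # float64 -- result of type promotion
   print(out)        # [3. 2. 2.]
   ```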




[GitHub] [incubator-mxnet] Laurawly commented on a change in pull request #16151: [numpy] [tvm] operator floor_divide

2019-10-03 Thread GitBox
Laurawly commented on a change in pull request #16151: [numpy] [tvm] operator 
floor_divide
URL: https://github.com/apache/incubator-mxnet/pull/16151#discussion_r331259767
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -1958,6 +1961,46 @@ def get_grad_right(a1, a2):
 assert_almost_equal(x.grad.asnumpy(), x_grad, rtol=rtol, atol=atol)
 
 
+@with_seed()
+@use_np
+def test_np_floor_divide():
+    if _features.is_enabled("TVM_OP"):
+        class TestFloorDivide(HybridBlock):
+            def __init__(self):
+                super(TestFloorDivide, self).__init__()
+
+            def hybrid_forward(self, F, x1, x2):
+                return F.np.floor_divide(x1, x2)
+
+        types = ['float64', 'float32', 'int64', 'int32', 'int8', 'uint8']
 
 Review comment:
   We can also add `fp16` here.




[GitHub] [incubator-mxnet] Laurawly commented on a change in pull request #16151: [numpy] [tvm] operator floor_divide

2019-10-03 Thread GitBox
Laurawly commented on a change in pull request #16151: [numpy] [tvm] operator 
floor_divide
URL: https://github.com/apache/incubator-mxnet/pull/16151#discussion_r331255293
 
 

 ##
 File path: contrib/tvmop/core/umath.py
 ##
 @@ -0,0 +1,113 @@
+ # Licensed to the Apache Software Foundation (ASF) under one
+ # or more contributor license agreements.  See the NOTICE file
+ # distributed with this work for additional information
+ # regarding copyright ownership.  The ASF licenses this file
+ # to you under the Apache License, Version 2.0 (the
+ # "License"); you may not use this file except in compliance
+ # with the License.  You may obtain a copy of the License at
+ #
+ #   http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing,
+ # software distributed under the License is distributed on an
+ # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ # KIND, either express or implied.  See the License for the
+ # specific language governing permissions and limitations
+ # under the License.
+import tvm
+from .. import defop, AllTypes
+
+def compute_floor_divide(dtype, ndim):
+    A = tvm.placeholder([tvm.var() for _ in range(ndim)], name='A', dtype=dtype)
+    B = tvm.placeholder([tvm.var() for _ in range(ndim)], name='B', dtype=dtype)
+    if dtype in ['float16', 'float32', 'float64']:
+        C = tvm.compute([tvm.var() for _ in range(ndim)],
+                        lambda *index: tvm.floor(A[index] / B[index]), name='C')
+    else:
+        C = tvm.compute([tvm.var() for _ in range(ndim)],
+                        lambda *index: tvm.if_then_else(B[index] == 0, tvm.const(0, dtype),
+                                                        tvm.floor(A[index].astype('float64') /
+                                                                  B[index].astype('float64')).astype(dtype)),
+                        name='C')
+    s = tvm.create_schedule(C.op)
+    return s, A, B, C
+
+@defop(name="floor_divide", target="cpu", auto_broadcast=True,
+       dtype=AllTypes, ndim=list(range(6)))
+def floor_divide(dtype, ndim):
+    s, A, B, C = compute_floor_divide(dtype, ndim)
+    axes = [axis for axis in C.op.axis]
+    fused = s[C].fuse(*axes)
+    s[C].parallel(fused)
+    return s, [A, B, C]
+
+@defop(name="cuda_floor_divide", target="cuda", auto_broadcast=True,
+       dtype=AllTypes, ndim=list(range(6)))
+def floor_divide_gpu(dtype, ndim):
+    s, A, B, C = compute_floor_divide(dtype, ndim)
+    axes = [axis for axis in C.op.axis]
+    fused = s[C].fuse(*axes)
+    bx, tx = s[C].split(fused, factor=64)
+    s[C].bind(bx, tvm.thread_axis("blockIdx.x"))
+    s[C].bind(tx, tvm.thread_axis("threadIdx.x"))
+    return s, [A, B, C]
+
+#  r represents the position of tensor
 
 Review comment:
   You can replace `r` with another name like `pos` or `tensor_pos` and remove this comment.




[GitHub] [incubator-mxnet] reminisce commented on issue #16370: Disable Pylint error in numpy_op_signature

2019-10-03 Thread GitBox
reminisce commented on issue #16370: Disable Pylint error in numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538107499
 
 
   @marcoabreu This is a pylint false positive, not a real error in the code.




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331227310
 
 

 ##
 File path: 
docs/python_docs/python/tutorials/performance/backend/mkldnn/index.rst
 ##
 @@ -32,9 +32,15 @@ Intel MKL-DNN
 
   How to perform quantization with MKLDNN
 
+   .. card::
+  :title: Connectionist Temporal Classification
+  :link: speech_recognition/ctc.html
 
 Review comment:
   That's a weird merge artifact... I'll remove it.
   




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331227032
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/symbol/data.md
 ##
 @@ -0,0 +1,474 @@
+
+
+
+
 
 Review comment:
   I don't disagree. Let's do that separately though. Other people might still 
be using that.




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331225724
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/hybrid.md
 ##
 @@ -0,0 +1,266 @@
+
+
 
 Review comment:
   All the links in this one are outdated; I'd say let's remove it.




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331224993
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/index.rst
 ##
 @@ -184,39 +216,57 @@ Advanced Topics
 
.. card::
   :title: Custom Layers
-  :link: custom-layer.html
+  :link: blocks/custom-layer.html
 
   A guide to implementing custom layers.
 
.. card::
   :title: Custom Operators
-  :link: 
https://mxnet.apache.org/versions/master/tutorials/gluon/customop.html
+  :link: ../../extend/customop.html
 
   Building custom operators with numpy.
 
.. card::
   :title: Custom Loss
-  :link: custom-loss/custom-loss.html
+  :link: loss/custom-loss.html
 
   A guide to implementing custom losses.
 
.. card::
   :title: Gotchas using NumPy in Apache MXNet
-  :link: 
https://mxnet.apache.org/versions/master/tutorials/gluon/gotchas_numpy_in_mxnet.html
+  :link: gotchas_numpy_in_mxnet.html
 
 Review comment:
   ```suggestion
 :link: ../ndarray/gotchas_numpy_in_mxnet.html
   ```




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331223529
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/index.rst
 ##
 @@ -125,7 +131,7 @@ Training
 
.. card::
   :title: Loss Functions
-  :link: loss/loss.html
+  :link: loss.html
 
 Review comment:
   ```suggestion
 :link: loss/loss.html
   ```




[GitHub] [incubator-mxnet] ChaiBapchya edited a comment on issue #16372: Remove redundant branch name

2019-10-03 Thread GitBox
ChaiBapchya edited a comment on issue #16372: Remove redundant branch name
URL: https://github.com/apache/incubator-mxnet/pull/16372#issuecomment-538100980
 
 
   @marcoabreu Verified by running it on dev




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16372: Remove redundant branch name

2019-10-03 Thread GitBox
ChaiBapchya commented on issue #16372: Remove redundant branch name
URL: https://github.com/apache/incubator-mxnet/pull/16372#issuecomment-538100980
 
 
   @marcoabreu 




[GitHub] [incubator-mxnet] ChaiBapchya opened a new pull request #16372: Remove redundant branch name

2019-10-03 Thread GitBox
ChaiBapchya opened a new pull request #16372: Remove redundant branch name
URL: https://github.com/apache/incubator-mxnet/pull/16372
 
 
   ## Description ##
   Title
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331218761
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/index.rst
 ##
 @@ -184,39 +216,57 @@ Advanced Topics
 
.. card::
   :title: Custom Layers
-  :link: custom-layer.html
+  :link: custom_layer.html
 
   A guide to implementing custom layers.
 
.. card::
   :title: Custom Operators
-  :link: 
https://mxnet.apache.org/versions/master/tutorials/gluon/customop.html
+  :link: customop.html
 
   Building custom operators with numpy.
 
.. card::
   :title: Custom Loss
-  :link: custom-loss/custom-loss.html
+  :link: custom_loss/custom_loss.html
 
 Review comment:
   ```suggestion
 :link: loss/custom-loss.html
   ```




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331218539
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/index.rst
 ##
 @@ -184,39 +216,57 @@ Advanced Topics
 
.. card::
   :title: Custom Layers
-  :link: custom-layer.html
+  :link: custom_layer.html
 
   A guide to implementing custom layers.
 
.. card::
   :title: Custom Operators
-  :link: 
https://mxnet.apache.org/versions/master/tutorials/gluon/customop.html
+  :link: customop.html
 
 Review comment:
   ```suggestion
 :link: ../../extend/customop.html
   ```




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331218261
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/index.rst
 ##
 @@ -184,39 +216,57 @@ Advanced Topics
 
.. card::
   :title: Custom Layers
-  :link: custom-layer.html
+  :link: custom_layer.html
 
 Review comment:
   ```suggestion
 :link: blocks/custom-layer.html
   ```




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16370: Disable Pylint error in numpy_op_signature

2019-10-03 Thread GitBox
ChaiBapchya commented on issue #16370: Disable Pylint error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370#issuecomment-538087514
 
 
   Curious to know how this reimport/import and absolute-import behavior works in pylint. A Stack Overflow search didn't quite give me a good idea!




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331205973
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/index.rst
 ##
 @@ -78,7 +78,7 @@ Data
 
.. card::
   :title: Image Augmentation
-  :link: image-augmentation.html
+  :link: image_augmentation.html
 
 Review comment:
   ```suggestion
 :link: image/image-augmentation.html
   ```




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331201897
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/index.rst
 ##
 @@ -90,7 +90,7 @@ Data
 
.. card::
   :title: Gluon Datasets and DataLoader
-  :link: data/datasets.html
+  :link: datasets.html
 
 Review comment:
   ```suggestion
 :link: datasets/datasets.html
   ```




[GitHub] [incubator-mxnet] access2rohit commented on issue #16371: [WIP]Adding large tensor support and test for gather_nd

2019-10-03 Thread GitBox
access2rohit commented on issue #16371: [WIP]Adding large tensor support and 
test for gather_nd
URL: https://github.com/apache/incubator-mxnet/pull/16371#issuecomment-538083918
 
 
   @mxnet-label-bot add [pr-work-in-progress]




[GitHub] [incubator-mxnet] access2rohit opened a new pull request #16371: [WIP]Adding large tensor support and test for gather_nd

2019-10-03 Thread GitBox
access2rohit opened a new pull request #16371: [WIP]Adding large tensor support 
and test for gather_nd
URL: https://github.com/apache/incubator-mxnet/pull/16371
 
 
   ## Description ##
   Changed the operator code to use `index_t` instead of `int`.
   
   ## Checklist ##
   ### Essentials ###
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ## Testing ##
   Currently running full test suite. Will update once done.
   




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331201897
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/index.rst
 ##
 @@ -90,7 +90,7 @@ Data
 
.. card::
   :title: Gluon Datasets and DataLoader
-  :link: data/datasets.html
+  :link: datasets.html
 
 Review comment:
   ```suggestion
 :link: data/datasets.html
   ```




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331201591
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/index.rst
 ##
 @@ -25,7 +25,7 @@ Getting started
 
.. card::
   :title: A 60-minute Gluon crash course
-  :link: ../../getting-started/crash-course/index.html
+  :link: ../../crash-course/index.html
 
 Review comment:
   ```suggestion
 :link: ../../getting-started/crash-course/index.html
   ```




[GitHub] [incubator-mxnet] aaronmarkham commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331200488
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/hybrid.md
 ##
 @@ -0,0 +1,266 @@
+
+
 
 Review comment:
   They're similarly named but the content seems quite different to me. Maybe 
same topic, but different...




[GitHub] [incubator-mxnet] stu1130 opened a new pull request #16370: Disable Pylint error in numpy_op_signature

2019-10-03 Thread GitBox
stu1130 opened a new pull request #16370: Disable Pylint error in 
numpy_op_signature 
URL: https://github.com/apache/incubator-mxnet/pull/16370
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-10-03 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 43c06be  Bump the publish timestamp.
43c06be is described below

commit 43c06be9d55f75fcfe6dcd052bce053c6e4e0206
Author: mxnet-ci 
AuthorDate: Thu Oct 3 18:42:05 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..af1c147
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Oct  3 18:42:05 UTC 2019



[incubator-mxnet] branch revert-16304-cifix created (now eff50d5)

2019-10-03 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a change to branch revert-16304-cifix
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at eff50d5  Revert "add mkl installation temp fix (#16304)"

No new revisions were added by this update.



[GitHub] [incubator-mxnet] lanking520 opened a new pull request #16369: Revert "add mkl installation temp fix"

2019-10-03 Thread GitBox
lanking520 opened a new pull request #16369: Revert "add mkl installation temp 
fix"
URL: https://github.com/apache/incubator-mxnet/pull/16369
 
 
   Reverts apache/incubator-mxnet#16304




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331180278
 
 

 ##
 File path: 
docs/python_docs/python/tutorials/packages/gluon/learning_rate_schedules.md
 ##
 @@ -0,0 +1,345 @@
+
+
+
 
 Review comment:
   already there: 
https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/training/learning_rates/learning_rate_schedules.html




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331172502
 
 

 ##
 File path: 
docs/python_docs/python/tutorials/packages/gluon/custom_loss/custom_loss.md
 ##
 @@ -0,0 +1,232 @@
+
 
 Review comment:
   This tutorial is already there




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331181333
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/symbol/data.md
 ##
 @@ -0,0 +1,474 @@
+
+
+
+
 
 Review comment:
   I think we should get rid of Symbol tutorials




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331179921
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/hybrid.md
 ##
 @@ -0,0 +1,266 @@
+
+
 
 Review comment:
   This tutorial is already there: 
https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/blocks/hybridize.html




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331181590
 
 

 ##
 File path: 
docs/python_docs/python/tutorials/performance/backend/mkldnn/index.rst
 ##
 @@ -32,9 +32,15 @@ Intel MKL-DNN
 
   How to perform quantization with MKLDNN
 
+   .. card::
+  :title: Connectionist Temporal Classification
+  :link: speech_recognition/ctc.html
 
 Review comment:
   that has nothing to do with mkldnn




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331179583
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/custom_layer.md
 ##
 @@ -0,0 +1,127 @@
+
+
 
 Review comment:
   this tutorial is already there: 
https://mxnet.apache.org/api/python/docs/tutorials/extend/custom_layer.html




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331173309
 
 

 ##
 File path: 
docs/python_docs/python/tutorials/packages/gluon/custom_loss/custom_loss.md
 ##
 @@ -0,0 +1,232 @@
+
 
 Review comment:
   
https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/loss/custom-loss.html




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331180158
 
 

 ##
 File path: 
docs/python_docs/python/tutorials/packages/gluon/learning_rate_finder.md
 ##
 @@ -0,0 +1,332 @@
+
+
+
 
 Review comment:
   already there 
https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/training/learning_rates/learning_rate_finder.html




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331181160
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/gluon/transforms.md
 ##
 @@ -0,0 +1,173 @@
+
+
+
 
 Review comment:
   All the links in this tutorial need to be updated; I'll take care of it in 
#16368




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331181465
 
 

 ##
 File path: docs/python_docs/python/tutorials/packages/symbol/symbol.md
 ##
 @@ -0,0 +1,446 @@
+
+
+
+
+
+
+
+
 
 Review comment:
   let's get rid of symbol tutorials




[GitHub] [incubator-mxnet] ThomasDelteil commented on a change in pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
ThomasDelteil commented on a change in pull request #16366: add back missing 
tutorials; fix links
URL: https://github.com/apache/incubator-mxnet/pull/16366#discussion_r331173205
 
 

 ##
 File path: 
docs/python_docs/python/tutorials/packages/gluon/control_flow/ControlFlowTutorial.md
 ##
 @@ -0,0 +1,405 @@
+
 
 Review comment:
   Can you rename this tutorial to follow the current convention?




[GitHub] [incubator-mxnet] rondogency commented on a change in pull request #15921: dynamic custom operator support

2019-10-03 Thread GitBox
rondogency commented on a change in pull request #15921: dynamic custom 
operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#discussion_r331181231
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -18,33 +18,756 @@
  */
 
 /*!
- * Copyright (c) 2015 by Contributors
+ * Copyright (c) 2019 by Contributors
  * \file lib_api.h
  * \brief APIs to interact with libraries
+ * This API specifies function prototypes to
+ * register custom ops for library authors
  */
+
 #ifndef MXNET_LIB_API_H_
 #define MXNET_LIB_API_H_
 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define MX_LIBRARY_VERSION 1
+
+/*!
+ * \brief External Tensor data types
+ */
+enum MXDType {
 
 Review comment:
   this is consistent with mshadow, not dlpack




[GitHub] [incubator-mxnet] ThomasDelteil opened a new pull request #16368: Adding back the data tutorials and fixing card links

2019-10-03 Thread GitBox
ThomasDelteil opened a new pull request #16368: Adding back the data tutorials 
and fixing card links
URL: https://github.com/apache/incubator-mxnet/pull/16368
 
 
   




[GitHub] [incubator-mxnet] rondogency commented on issue #16367: Flaky test test_operator_gpu.test_preloaded_multi_sgd

2019-10-03 Thread GitBox
rondogency commented on issue #16367: Flaky test 
test_operator_gpu.test_preloaded_multi_sgd
URL: 
https://github.com/apache/incubator-mxnet/issues/16367#issuecomment-538058589
 
 
   Did you merge the latest master? A fix for it has been merged yesterday 
https://github.com/apache/incubator-mxnet/pull/16356




[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #16022: [MXNET-1421] Added (CuDNN)BatchNorm operator to the list of mirrored operators

2019-10-03 Thread GitBox
eric-haibin-lin commented on issue #16022: [MXNET-1421] Added (CuDNN)BatchNorm 
operator to the list of mirrored operators
URL: https://github.com/apache/incubator-mxnet/pull/16022#issuecomment-538042053
 
 
   @antinucleon 




[GitHub] [incubator-mxnet] larroy commented on issue #16367: Flaky test test_operator_gpu.test_preloaded_multi_sgd

2019-10-03 Thread GitBox
larroy commented on issue #16367: Flaky test 
test_operator_gpu.test_preloaded_multi_sgd
URL: 
https://github.com/apache/incubator-mxnet/issues/16367#issuecomment-538030647
 
 
   @mxnet-label-bot add [Test, Flaky]




[GitHub] [incubator-mxnet] larroy opened a new issue #16367: Flaky test test_operator_gpu.test_preloaded_multi_sgd

2019-10-03 Thread GitBox
larroy opened a new issue #16367: Flaky test 
test_operator_gpu.test_preloaded_multi_sgd
URL: https://github.com/apache/incubator-mxnet/issues/16367
 
 
   See recent builds; this test is failing on Windows GPU.
   
   
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fwindows-cpu/detail/PR-16253/3/pipeline/




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16367: Flaky test test_operator_gpu.test_preloaded_multi_sgd

2019-10-03 Thread GitBox
mxnet-label-bot commented on issue #16367: Flaky test 
test_operator_gpu.test_preloaded_multi_sgd
URL: 
https://github.com/apache/incubator-mxnet/issues/16367#issuecomment-538030575
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Test, Flaky




[GitHub] [incubator-mxnet] larroy commented on issue #13484: flaky test test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker

2019-10-03 Thread GitBox
larroy commented on issue #13484: flaky test 
test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker
URL: 
https://github.com/apache/incubator-mxnet/issues/13484#issuecomment-538029443
 
 
   This looks more like an IO stall than a bug.




[GitHub] [incubator-mxnet] aaronmarkham opened a new pull request #16366: add back missing tutorials; fix links

2019-10-03 Thread GitBox
aaronmarkham opened a new pull request #16366: add back missing tutorials; fix 
links
URL: https://github.com/apache/incubator-mxnet/pull/16366
 
 
   ## Description ##
   Fixing up some missing tutorials and/or broken links.
   I added back some things that seemed missing, but they might have just been 
renamed/relocated. I figured it would be better to fix the links/content now, 
then dedupe if that comes up.




[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #16355: Embedding gradient performance optimization on GPU

2019-10-03 Thread GitBox
ptrendx commented on a change in pull request #16355: Embedding gradient 
performance optimization on GPU
URL: https://github.com/apache/incubator-mxnet/pull/16355#discussion_r331135482
 
 

 ##
 File path: src/operator/tensor/indexing_op.cu
 ##
 @@ -545,6 +545,247 @@ void TakeOpForward(const nnvm::NodeAttrs& attrs,
   });
 }
 
+namespace {
+  /*
+  * \brief returns integer log2(a) rounded up
+  */
+  inline int ilog2(unsigned int a) {
+int k = 1;
+while (a >>= 1) k++;
+return k;
+  }
+}
+
+/*
+ * \brief finds the lower and upper-bound positions of each unique element 
within a sorted input array
+ * \param sorted_data input elements previously sorted
+ * \param bounds output containing all lower-bound followed by all upper-bound 
positions
+ * \param data_dim total number of elements in the input array
+ * \param vocab_dim maximum number of unique elements
+ */
+template <typename IType>
+__global__ void EmbeddingFindBounds(const IType *sorted_data,
+IType *bounds,
+const index_t data_dim,
+const index_t vocab_dim) {
+  const index_t id = blockIdx.x * blockDim.x + threadIdx.x;
+
+  // Binary search to find lower bound: stored at bounds[0..vocab_dim-1]
+  IType lower_bound = 0;
+  IType upper_bound = data_dim - 1;
+  IType mean;
+  while (lower_bound < upper_bound) {
+mean = (lower_bound + upper_bound) / 2;
+if (id <= sorted_data[mean])
+  upper_bound = mean;
+else
+  lower_bound = mean + 1;
+  }
+  bool found_row = (sorted_data[lower_bound] == id);
+
+  if (id < vocab_dim) {
+bounds[id] = (found_row) ? lower_bound : -1;
+  }
+
+  // Binary search to find upper bound: stored at 
bounds[vocab_dim..2*vocab_dim-1]
+  lower_bound = 0;
+  upper_bound = data_dim - 1;
+  while (lower_bound < upper_bound) {
+mean = (lower_bound + upper_bound + 1) / 2;
+if (id >= sorted_data[mean])
+  lower_bound = mean;
+else
+  upper_bound = mean - 1;
+  }
+  found_row = (sorted_data[upper_bound] == id);
+
+  if (id < vocab_dim) {
+bounds[vocab_dim + id] = (found_row) ? upper_bound : -1;
+  }
+}
+
+/*
+ * \brief kernel to compute gradient of EmbeddingOp
+ * \param grad_in input gradient data
+ * \param original_index reference to the position at original input data for 
each index
+ * \param index_bounds lower and upper-bounds positions of each unique index
+ * \param grad_out output gradient data
+ * \param embbedding_dim dimension of the dense embedding
+ * \param vocab_dim maximum number of unique indices in the data array: tokens 
vocabulary size
+ * \param rows_per_block number of grad_in rows to be computed by each block
+ * \param req write/add/null
+ */
+template <typename LType, typename DType, typename IType>
+__global__ void EmbeddingGradKernel(DType *grad_in,
+  const IType *original_index,
+  const IType *index_bounds,
+  const DType *grad_out,
+  const index_t embbedding_dim,
+  const index_t vocab_dim,
+  const int rows_per_block,
+  const int req) {
+  extern __shared__ int sharedmem[];
+  LType* grad_in_row = reinterpret_cast<LType*>(sharedmem);
+
+  // LType has to be bigger than DType, guarded in the launcher code
+  const int n_val = sizeof(DType) < sizeof(LType) ? sizeof(LType) / 
sizeof(DType) : 1;
+  const LType *aligned_grad_out = reinterpret_cast<const LType*>(grad_out);
+  LType *aligned_grad_in = reinterpret_cast<LType*>(grad_in);
+  const index_t aligned_emb_dim = embbedding_dim / n_val;
+  DType *my_grad_in_row = reinterpret_cast<DType*>(&grad_in_row[threadIdx.x]);
+  LType Lvalue[1];
+  DType* Dvalues = reinterpret_cast<DType*>(Lvalue);
+
+  for (index_t row=0; row < rows_per_block; ++row) {
+IType my_row = blockIdx.x * rows_per_block + row;
+if (my_row < vocab_dim) {
+  // Read lower and upper bounds for current row
+  IType lower_bound = index_bounds[my_row];
+  IType upper_bound = index_bounds[vocab_dim + my_row];
+  int nOccurrences = (lower_bound != -1) ? (upper_bound - lower_bound + 1) 
: 0;
+
+  for (index_t emb_id=threadIdx.x; emb_id < aligned_emb_dim; emb_id += 
blockDim.x) {
+// Initialize grad_in
+if (req == kAddTo) {
+  grad_in_row[threadIdx.x] = aligned_grad_in[my_row * aligned_emb_dim 
+ emb_id];
+} else {
+  grad_in_row[threadIdx.x] = 0.0;
+}
+// Add all rows from grad_out according to indices in data
+if (nOccurrences) {
+  for (index_t data_idx=lower_bound; data_idx < (lower_bound + 
nOccurrences); ++data_idx) {
+*Lvalue = aligned_grad_out[original_index[data_idx] * 
aligned_emb_dim + emb_id];
+for (index_t val_id = 0; val_id < n_val; val_id++) {
+  my_grad_in_row[val_id] += Dvalues[val_id];
+}
+  }
+}
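
The quoted `EmbeddingFindBounds` kernel runs one binary search per token id to locate the first and last occurrence of that id in a sorted index array, storing all lower bounds followed by all upper bounds, with `-1` for absent ids. A minimal CPU-side sketch of the same bounds computation, written as a reading aid rather than MXNet API (the function name `find_bounds` is illustrative):

```python
from bisect import bisect_left, bisect_right

def find_bounds(sorted_data, vocab_dim):
    """Mirror of EmbeddingFindBounds: for each token id in [0, vocab_dim),
    record its first and last positions in sorted_data, or -1 if absent.
    Layout matches the kernel: all lower bounds, then all upper bounds."""
    lower = [-1] * vocab_dim
    upper = [-1] * vocab_dim
    for token in range(vocab_dim):
        lo = bisect_left(sorted_data, token)        # first pos with value >= token
        hi = bisect_right(sorted_data, token) - 1   # last pos with value <= token
        if lo < len(sorted_data) and sorted_data[lo] == token:
            lower[token] = lo
            upper[token] = hi
    return lower + upper

# e.g. sorted indices [1, 1, 3] over a 5-token vocabulary: token 1 spans
# positions 0..1, token 3 spans 2..2, tokens 0, 2, 4 are absent (-1).
bounds = find_bounds([1, 1, 3], 5)
```

The kernel computes the same thing in parallel, one thread per token id, which is why each search is independent and touches only `sorted_data`.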

[GitHub] [incubator-mxnet] samskalicky edited a comment on issue #14728: [MXNET-1386] fix for shape mismatch

2019-10-03 Thread GitBox
samskalicky edited a comment on issue #14728: [MXNET-1386] fix for shape 
mismatch
URL: https://github.com/apache/incubator-mxnet/pull/14728#issuecomment-538021245
 
 
   @pengzhao-intel I'm getting a weird failure for the MKL test_subgraph.py 
test, but all the other tests are passing. Here's one of the failing tests (from 
the unix-cpu job):
   
   ```
   ==
   ERROR: test_subgraph.test_pos_conv_add2
   --
   Traceback (most recent call last):
 File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in 
runTest
   self.test(*self.arg)
 File "/work/mxnet/tests/python/mkl/../unittest/common.py", line 177, in 
test_new
   orig_test(*args, **kwargs)
 File "/work/mxnet/tests/python/mkl/test_subgraph.py", line 735, in 
test_pos_conv_add2
   check_fusion(net, data_shape, attrs)
 File "/work/mxnet/tests/python/mkl/../unittest/common.py", line 177, in 
test_new
   orig_test(*args, **kwargs)
 File "/work/mxnet/tests/python/mkl/test_subgraph.py", line 272, in 
check_fusion
   assert_almost_equal(exe.outputs[i].asnumpy(), 
exe_sg.outputs[i].asnumpy(), rtol=1e-3, atol=1e-1)
 File "/work/mxnet/python/mxnet/ndarray/ndarray.py", line 2504, in asnumpy
   ctypes.c_size_t(data.size)))
 File "/work/mxnet/python/mxnet/base.py", line 254, in check_call
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: std::exception
    >> begin captured logging << 
   common: INFO: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=927032378 to reproduce.
   common: INFO: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=22356853 to reproduce.
   - >> end captured logging << -
   ```
   
   Can someone from the Intel team help debug?




[GitHub] [incubator-mxnet] samskalicky commented on issue #14728: [MXNET-1386] fix for shape mismatch

2019-10-03 Thread GitBox
samskalicky commented on issue #14728: [MXNET-1386] fix for shape mismatch
URL: https://github.com/apache/incubator-mxnet/pull/14728#issuecomment-538021245
 
 
   @PatricZhao I'm getting a weird failure for the MKL test_subgraph.py test, 
but all the other tests are passing. Here's one of the failing tests (from the 
unix-cpu job):
   
   ```
   ==
   ERROR: test_subgraph.test_pos_conv_add2
   --
   Traceback (most recent call last):
 File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in 
runTest
   self.test(*self.arg)
 File "/work/mxnet/tests/python/mkl/../unittest/common.py", line 177, in 
test_new
   orig_test(*args, **kwargs)
 File "/work/mxnet/tests/python/mkl/test_subgraph.py", line 735, in 
test_pos_conv_add2
   check_fusion(net, data_shape, attrs)
 File "/work/mxnet/tests/python/mkl/../unittest/common.py", line 177, in 
test_new
   orig_test(*args, **kwargs)
 File "/work/mxnet/tests/python/mkl/test_subgraph.py", line 272, in 
check_fusion
   assert_almost_equal(exe.outputs[i].asnumpy(), 
exe_sg.outputs[i].asnumpy(), rtol=1e-3, atol=1e-1)
 File "/work/mxnet/python/mxnet/ndarray/ndarray.py", line 2504, in asnumpy
   ctypes.c_size_t(data.size)))
 File "/work/mxnet/python/mxnet/base.py", line 254, in check_call
   raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: std::exception
    >> begin captured logging << 
   common: INFO: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=927032378 to reproduce.
   common: INFO: Setting test np/mx/python random seeds, use 
MXNET_TEST_SEED=22356853 to reproduce.
   - >> end captured logging << -
   ```
   
   Can someone from the Intel team help debug?




[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #16355: Embedding gradient performance optimization on GPU

2019-10-03 Thread GitBox
ptrendx commented on a change in pull request #16355: Embedding gradient 
performance optimization on GPU
URL: https://github.com/apache/incubator-mxnet/pull/16355#discussion_r331134292
 
 

 ##
 File path: src/operator/tensor/indexing_op.cu
 ##
 @@ -545,6 +545,247 @@ void TakeOpForward(const nnvm::NodeAttrs& attrs,
   });
 }
 
+namespace {
+  /*
+  * \brief returns integer log2(a) rounded up
+  */
+  inline int ilog2(unsigned int a) {
+int k = 1;
+while (a >>= 1) k++;
+return k;
+  }
+}
+
+/*
+ * \brief finds the lower and upper-bound positions of each unique element 
within a sorted input array
+ * \param sorted_data input elements previously sorted
+ * \param bounds output containing all lower-bound followed by all upper-bound 
positions
+ * \param data_dim total number of elements in the input array
+ * \param vocab_dim maximum number of unique elements
+ */
+template <typename IType>
+__global__ void EmbeddingFindBounds(const IType *sorted_data,
+IType *bounds,
+const index_t data_dim,
+const index_t vocab_dim) {
+  const index_t id = blockIdx.x * blockDim.x + threadIdx.x;
+
+  // Binary search to find lower bound: stored at bounds[0..vocab_dim-1]
+  IType lower_bound = 0;
+  IType upper_bound = data_dim - 1;
+  IType mean;
+  while (lower_bound < upper_bound) {
+mean = (lower_bound + upper_bound) / 2;
+if (id <= sorted_data[mean])
+  upper_bound = mean;
+else
+  lower_bound = mean + 1;
+  }
+  bool found_row = (sorted_data[lower_bound] == id);
+
+  if (id < vocab_dim) {
+bounds[id] = (found_row) ? lower_bound : -1;
+  }
+
+  // Binary search to find upper bound: stored at 
bounds[vocab_dim..2*vocab_dim-1]
+  lower_bound = 0;
+  upper_bound = data_dim - 1;
+  while (lower_bound < upper_bound) {
+mean = (lower_bound + upper_bound + 1) / 2;
+if (id >= sorted_data[mean])
+  lower_bound = mean;
+else
+  upper_bound = mean - 1;
+  }
+  found_row = (sorted_data[upper_bound] == id);
+
+  if (id < vocab_dim) {
+bounds[vocab_dim + id] = (found_row) ? upper_bound : -1;
+  }
+}
+
+/*
+ * \brief kernel to compute gradient of EmbeddingOp
+ * \param grad_in input gradient data
+ * \param original_index reference to the position at original input data for 
each index
+ * \param index_bounds lower and upper-bounds positions of each unique index
+ * \param grad_out output gradient data
+ * \param embbedding_dim dimension of the dense embedding
+ * \param vocab_dim maximum number of unique indices in the data array: tokens 
vocabulary size
+ * \param rows_per_block number of grad_in rows to be computed by each block
+ * \param req write/add/null
+ */
+template 
+__global__ void EmbeddingGradKernel(DType *grad_in,
+  const IType *original_index,
+  const IType *index_bounds,
+  const DType *grad_out,
+  const index_t embbedding_dim,
+  const index_t vocab_dim,
+  const int rows_per_block,
+  const int req) {
+  extern __shared__ int sharedmem[];
+  LType* grad_in_row =  reinterpret_cast(sharedmem);
+
+  // LType has to be bigger than DType, guarded in the launcher code
+  const int n_val = sizeof(DType) < sizeof(LType) ? sizeof(LType) / 
sizeof(DType) : 1;
+  const LType *aligned_grad_out = reinterpret_cast(grad_out);
+  LType *aligned_grad_in = reinterpret_cast(grad_in);
+  const index_t aligned_emb_dim = embbedding_dim / n_val;
+  DType *my_grad_in_row = reinterpret_cast(&grad_in_row[threadIdx.x]);
+  LType Lvalue[1];
+  DType* Dvalues = reinterpret_cast(Lvalue);
+
+  for (index_t row=0; row < rows_per_block; ++row) {
+IType my_row = blockIdx.x * rows_per_block + row;
+if (my_row < vocab_dim) {
+  // Read lower and upper bounds for current row
+  IType lower_bound = index_bounds[my_row];
+  IType upper_bound = index_bounds[vocab_dim + my_row];
+  int nOccurrences = (lower_bound != -1) ? (upper_bound - lower_bound + 1) 
: 0;
+
+  for (index_t emb_id=threadIdx.x; emb_id < aligned_emb_dim; emb_id += 
blockDim.x) {
+// Initialize grad_in
+if (req == kAddTo) {
+  grad_in_row[threadIdx.x] = aligned_grad_in[my_row * aligned_emb_dim 
+ emb_id];
+} else {
+  grad_in_row[threadIdx.x] = 0.0;
+}
+// Add all rows from grad_out according to indices in data
+if (nOccurrences) {
 
 Review comment:
   Does this help? It should already be handled the same way by the for loop.
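For readers following the diff, the two binary searches in `EmbeddingFindBounds` can be modeled on the host in plain Python. This is an illustrative sketch of the kernel's logic (one loop iteration standing in for one CUDA thread), not code from the PR:

```python
def find_bounds(sorted_data, vocab_dim):
    """Host-side model of EmbeddingFindBounds: for each id in
    [0, vocab_dim), store the first position of id in sorted_data in
    bounds[id] and the last position in bounds[vocab_dim + id],
    or -1 if id does not occur."""
    data_dim = len(sorted_data)
    bounds = [-1] * (2 * vocab_dim)
    for idx in range(vocab_dim):  # one CUDA thread per candidate id
        # Lower bound: leftmost position with sorted_data[pos] >= idx
        lo, hi = 0, data_dim - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if idx <= sorted_data[mid]:
                hi = mid
            else:
                lo = mid + 1
        if sorted_data[lo] == idx:
            bounds[idx] = lo
        # Upper bound: rightmost position with sorted_data[pos] <= idx
        lo, hi = 0, data_dim - 1
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if idx >= sorted_data[mid]:
                lo = mid
            else:
                hi = mid - 1
        if sorted_data[hi] == idx:
            bounds[vocab_dim + idx] = hi
    return bounds

# Ids 0, 2, 3 occur; id 1 is absent:
print(find_bounds([0, 0, 2, 2, 2, 3], 4))  # [0, -1, 2, 5, 1, -1, 4, 5]
```

Absent ids yield -1 in both halves of `bounds`, which is what the `found_row` checks in the kernel guarantee.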


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #16355: Embedding gradient performance optimization on GPU

2019-10-03 Thread GitBox
ptrendx commented on a change in pull request #16355: Embedding gradient 
performance optimization on GPU
URL: https://github.com/apache/incubator-mxnet/pull/16355#discussion_r331133783
 
 

 ##
 File path: src/operator/tensor/indexing_op.cu
 ##
 @@ -545,6 +545,247 @@ void TakeOpForward(const nnvm::NodeAttrs& attrs,
   });
 }
 
+namespace {
+  /*
+  * \brief returns integer log2(a) rounded up
+  */
+  inline int ilog2(unsigned int a) {
+    int k = 1;
+    while (a >>= 1) k++;
+    return k;
+  }
+}
+
+/*
+ * \brief finds the lower and upper-bound positions of each unique element within a sorted input array
+ * \param sorted_data input elements previously sorted
+ * \param bounds output containing all lower-bound followed by all upper-bound positions
+ * \param data_dim total number of elements in the input array
+ * \param vocab_dim maximum number of unique elements
+ */
+template <typename IType>
+__global__ void EmbeddingFindBounds(const IType *sorted_data,
+                                    IType *bounds,
+                                    const index_t data_dim,
+                                    const index_t vocab_dim) {
+  const index_t id = blockIdx.x * blockDim.x + threadIdx.x;
+
+  // Binary search to find lower bound: stored at bounds[0..vocab_dim-1]
+  IType lower_bound = 0;
+  IType upper_bound = data_dim - 1;
+  IType mean;
+  while (lower_bound < upper_bound) {
+    mean = (lower_bound + upper_bound) / 2;
+    if (id <= sorted_data[mean])
+      upper_bound = mean;
+    else
+      lower_bound = mean + 1;
+  }
+  bool found_row = (sorted_data[lower_bound] == id);
+
+  if (id < vocab_dim) {
+    bounds[id] = (found_row) ? lower_bound : -1;
+  }
+
+  // Binary search to find upper bound: stored at bounds[vocab_dim..2*vocab_dim-1]
+  lower_bound = 0;
+  upper_bound = data_dim - 1;
+  while (lower_bound < upper_bound) {
+    mean = (lower_bound + upper_bound + 1) / 2;
+    if (id >= sorted_data[mean])
+      lower_bound = mean;
+    else
+      upper_bound = mean - 1;
+  }
+  found_row = (sorted_data[upper_bound] == id);
+
+  if (id < vocab_dim) {
+    bounds[vocab_dim + id] = (found_row) ? upper_bound : -1;
+  }
+}
+
+/*
+ * \brief kernel to compute gradient of EmbeddingOp
+ * \param grad_in input gradient data
+ * \param original_index reference to the position at original input data for each index
+ * \param index_bounds lower and upper-bound positions of each unique index
+ * \param grad_out output gradient data
+ * \param embbedding_dim dimension of the dense embedding
+ * \param vocab_dim maximum number of unique indices in the data array: tokens vocabulary size
+ * \param rows_per_block number of grad_in rows to be computed by each block
+ * \param req write/add/null
+ */
+template <typename LType, typename DType, typename IType>
+__global__ void EmbeddingGradKernel(DType *grad_in,
+                                    const IType *original_index,
+                                    const IType *index_bounds,
+                                    const DType *grad_out,
+                                    const index_t embbedding_dim,
+                                    const index_t vocab_dim,
+                                    const int rows_per_block,
+                                    const int req) {
+  extern __shared__ int sharedmem[];
+  LType* grad_in_row = reinterpret_cast<LType*>(sharedmem);
+
+  // LType has to be bigger than DType, guarded in the launcher code
+  const int n_val = sizeof(DType) < sizeof(LType) ? sizeof(LType) / sizeof(DType) : 1;
+  const LType *aligned_grad_out = reinterpret_cast<const LType*>(grad_out);
+  LType *aligned_grad_in = reinterpret_cast<LType*>(grad_in);
+  const index_t aligned_emb_dim = embbedding_dim / n_val;
+  DType *my_grad_in_row = reinterpret_cast<DType*>(&grad_in_row[threadIdx.x]);
+  LType Lvalue[1];
+  DType* Dvalues = reinterpret_cast<DType*>(Lvalue);
+
+  for (index_t row = 0; row < rows_per_block; ++row) {
+    IType my_row = blockIdx.x * rows_per_block + row;
+    if (my_row < vocab_dim) {
+      // Read lower and upper bounds for current row
+      IType lower_bound = index_bounds[my_row];
+      IType upper_bound = index_bounds[vocab_dim + my_row];
+      int nOccurrences = (lower_bound != -1) ? (upper_bound - lower_bound + 1) : 0;
 
 Review comment:
   Super small hack and probably not worth it, but if you set `upper_bound` to 
-2 if you did not find the lower bound, then you would not need this 
conditional here.
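To make the suggested micro-optimization concrete (a hypothetical sketch, not code from the PR): if the lower-bound search stores -1 for an absent id and the upper-bound search stores -2, the plain formula `upper_bound - lower_bound + 1` already evaluates to zero, so the conditional disappears:

```python
def n_occurrences(lower_bound, upper_bound):
    # Branch-free count: with sentinels lower_bound == -1 and
    # upper_bound == -2 for an absent id, this is -2 - (-1) + 1 == 0.
    return upper_bound - lower_bound + 1

print(n_occurrences(2, 4))    # id found at positions 2..4 -> 3
print(n_occurrences(-1, -2))  # id absent -> 0
```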




[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #16355: Embedding gradient performance optimization on GPU

2019-10-03 Thread GitBox
ptrendx commented on a change in pull request #16355: Embedding gradient 
performance optimization on GPU
URL: https://github.com/apache/incubator-mxnet/pull/16355#discussion_r331133434
 
 

 ##
 File path: src/operator/tensor/indexing_op.cu
 ##
 @@ -545,6 +545,247 @@ void TakeOpForward(const nnvm::NodeAttrs& attrs,
   });
 }
 
+namespace {
+  /*
+  * \brief returns integer log2(a) rounded up
+  */
+  inline int ilog2(unsigned int a) {
+    int k = 1;
+    while (a >>= 1) k++;
+    return k;
+  }
+}
+
+/*
+ * \brief finds the lower and upper-bound positions of each unique element within a sorted input array
+ * \param sorted_data input elements previously sorted
+ * \param bounds output containing all lower-bound followed by all upper-bound positions
+ * \param data_dim total number of elements in the input array
+ * \param vocab_dim maximum number of unique elements
+ */
+template <typename IType>
+__global__ void EmbeddingFindBounds(const IType *sorted_data,
+                                    IType *bounds,
+                                    const index_t data_dim,
+                                    const index_t vocab_dim) {
+  const index_t id = blockIdx.x * blockDim.x + threadIdx.x;
+
+  // Binary search to find lower bound: stored at bounds[0..vocab_dim-1]
+  IType lower_bound = 0;
+  IType upper_bound = data_dim - 1;
+  IType mean;
+  while (lower_bound < upper_bound) {
+    mean = (lower_bound + upper_bound) / 2;
+    if (id <= sorted_data[mean])
+      upper_bound = mean;
+    else
+      lower_bound = mean + 1;
+  }
+  bool found_row = (sorted_data[lower_bound] == id);
+
+  if (id < vocab_dim) {
+    bounds[id] = (found_row) ? lower_bound : -1;
 
 Review comment:
   Maybe early exit? How long is this kernel?




[GitHub] [incubator-mxnet] anirudh2290 commented on issue #15589: [Discussion] 1.6.0 Roadmap

2019-10-03 Thread GitBox
anirudh2290 commented on issue #15589: [Discussion] 1.6.0 Roadmap
URL: 
https://github.com/apache/incubator-mxnet/issues/15589#issuecomment-538017324
 
 
   I am working on an interface for multi-threaded inference in MXNet and it 
would be great if it could go into 1.6.




[GitHub] [incubator-mxnet] TaoLv commented on issue #16300: intel-mkl package failing to install during ubuntu builds

2019-10-03 Thread GitBox
TaoLv commented on issue #16300: intel-mkl package failing to install during 
ubuntu builds
URL: 
https://github.com/apache/incubator-mxnet/issues/16300#issuecomment-537988728
 
 
   
   Sorry, I'm on holiday and cannot verify it from my mobile phone. But I was told 
that it was already resolved. Could you please reinstall the key and try the 
steps in 
https://software.intel.com/en-us/articles/installing-intel-free-libs-and-python-apt-repo
 again?
   
   Sent from my iPhone
   
   On Oct 3, 2019, at 1:31 AM, Lanking wrote:
   
   @TaoLv I saw someone trying to solve this 
problem, but it still seemed not to be working. Is there an ETA for this? 
https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/821365
   




[incubator-mxnet] branch master updated (3244a7a -> b6f3235)

2019-10-03 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3244a7a  Julia: add API docs back (#16363)
 add b6f3235  Fix nightly scala pipeline (#16362)

No new revisions were added by this update.

Summary of changes:
 ci/docker/Dockerfile.publish.ubuntu1604_cpu | 2 ++
 ci/docker/Dockerfile.publish.ubuntu1604_gpu | 2 ++
 2 files changed, 4 insertions(+)



[GitHub] [incubator-mxnet] aaronmarkham merged pull request #16362: Fix nightly scala pipeline

2019-10-03 Thread GitBox
aaronmarkham merged pull request #16362: Fix nightly scala pipeline
URL: https://github.com/apache/incubator-mxnet/pull/16362
 
 
   




[GitHub] [incubator-mxnet] QueensGambit commented on issue #16173: Saving and loading cudNN autotune and graph optimization

2019-10-03 Thread GitBox
QueensGambit commented on issue #16173: Saving and loading cudNN autotune and 
graph optimization
URL: 
https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-537972862
 
 
   @chinakook This is an option, but it could undermine the ability to deploy the 
same model across different platforms (e.g. CPU, CPU-MKLDNN, GPU-CUDA, 
GPU-TensorRT).




[GitHub] [incubator-mxnet] chinakook commented on issue #16173: Saving and loading cudNN autotune and graph optimization

2019-10-03 Thread GitBox
chinakook commented on issue #16173: Saving and loading cudNN autotune and 
graph optimization
URL: 
https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-537967622
 
 
   Maybe we can save the optimization states in params directly.




[GitHub] [incubator-mxnet] QueensGambit edited a comment on issue #16173: Saving and loading cudNN autotune and graph optimization

2019-10-03 Thread GitBox
QueensGambit edited a comment on issue #16173: Saving and loading cudNN 
autotune and graph optimization
URL: 
https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-537934625
 
 
   Regarding the export of a TensorRT executor handle (@Caenorst, @haohuanw),
   the [ONNX-TensorRT repository](https://github.com/onnx/onnx-tensorrt) 
provides an executable to generate a TensorRT engine file from an ONNX model:
   
   ```
   onnx2trt my_model.onnx -o my_engine.trt
   ```
   Alternatively, one can use the C++ API instead:
   
   ```
   NvOnnxParser.h
   NvOnnxParserTypedefs.h
   ```
   
   Later, the serialized engine file can be reloaded from disk.
   Here is example Python code for this, using code fragments from 
https://github.com/onnx/onnx-tensorrt/issues/180 and 
https://github.com/NVIDIA/object-detection-tensorrt-example/blob/master/SSD_Model/utils/common.py.
 
   Unfortunately, I haven't found a C++ example for this yet:
   
   
   ```python
   import pycuda.autoinit
   import pycuda.driver as cuda
   import tensorrt as trt
   import numpy as np
   
   trt_engine_path = 'my_engine.trt'
   # initialize
   TRT_LOGGER = trt.Logger(trt.Logger.INFO)
   trt.init_libnvinfer_plugins(TRT_LOGGER, '')
   runtime = trt.Runtime(TRT_LOGGER)
   
   # https://github.com/onnx/onnx-tensorrt/issues/180
   def allocate_buffers(engine):
       """
       Allocates all buffers required for the specified engine
       """
       inputs = []
       outputs = []
       bindings = []
       # Iterate over binding names in engine
       for binding in engine:
           # Get binding (tensor/buffer) size
           size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
           # Get binding (tensor/buffer) data type (numpy-equivalent)
           dtype = trt.nptype(engine.get_binding_dtype(binding))
           # Allocate page-locked memory (i.e., pinned memory) buffers
           host_mem = cuda.pagelocked_empty(size, dtype)
           # Allocate linear piece of device memory
           device_mem = cuda.mem_alloc(host_mem.nbytes)
           # Append the device buffer to device bindings
           bindings.append(int(device_mem))
           # Append to inputs/outputs list
           if engine.binding_is_input(binding):
               inputs.append(HostDeviceMem(host_mem, device_mem))
           else:
               outputs.append(HostDeviceMem(host_mem, device_mem))
       # Create a stream (to eventually copy inputs/outputs and run inference)
       stream = cuda.Stream()
       return inputs, outputs, bindings, stream
   
   def infer(context, bindings, inputs, outputs, stream, batch_size=1):
       """
       Infer outputs on the IExecutionContext for the specified inputs
       """
       # Transfer input data to the GPU
       [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
       # Run inference
       context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream.handle)
       # Transfer predictions back from the GPU
       [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
       # Synchronize the stream
       stream.synchronize()
       # Return the host outputs
       return [out.host for out in outputs]
   
   # https://github.com/NVIDIA/object-detection-tensorrt-example/blob/master/SSD_Model/utils/common.py
   # Simple helper data class that's a little nicer to use than a 2-tuple.
   class HostDeviceMem(object):
       def __init__(self, host_mem, device_mem):
           self.host = host_mem
           self.device = device_mem
   
       def __str__(self):
           return "Host:\n" + str(self.host) + "\nDevice:\n" + str(self.device)
   
       def __repr__(self):
           return self.__str__()
   
   image = np.zeros((1, 3, 224, 224))  # dummy data
   
   # Read the serialized ICudaEngine
   with open(trt_engine_path, 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
       # Deserialize ICudaEngine
       engine = runtime.deserialize_cuda_engine(f.read())
       # Now just as with the onnx2trt samples...
       # Create an IExecutionContext (context for executing inference)
       with engine.create_execution_context() as context:
           # Allocate memory for inputs/outputs
           inputs, outputs, bindings, stream = allocate_buffers(engine)
           # Set host input to the image
           inputs[0].host = image
           # Inference
           trt_outputs = infer(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)
           # Prediction
           pred_id = np.argmax(trt_outputs[-1])
   ```




[GitHub] [incubator-mxnet] QueensGambit edited a comment on issue #16173: Saving and loading cudNN autotune and graph optimization

2019-10-03 Thread GitBox
QueensGambit edited a comment on issue #16173: Saving and loading cudNN 
autotune and graph optimization
URL: 
https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-537932809
 
 
   Thank you for the feedback @KellenSunderland.
   I think the mini-batch size should be included in the caching specification as 
well, because optimization techniques like TensorRT depend on it.
   
   A different approach would be to define a save and load function for the 
[Executor 
class](https://github.com/apache/incubator-mxnet/blob/992c3c0dd90c0723de6934e826a49bad6569eeac/include/mxnet/executor.h#L53).
   The memory file of an executor handle would contain all additional 
platform-specific definitions and optimization results. This would allow the user 
to run the full binding process once on a specific platform and later rebind it 
much more quickly:
   
   ```python
   # mxnet/executor.py
   def save(filename_exec, type):
       """Saves the executor handle including specific optimization of the graph.
       Must be run after the executor handle was bound: `model.bind()`.
   
       Parameters
       ----------
       filename_exec : str
           Path to the executor file (e.g. "executor.exec").
   
       References
       ----------
       `Saving and Loading of Executor handles \
       `_
       """
   ```
   
   To avoid an additional copy of the model parameters, one needs to specify the 
`.params` and `.symbol` file paths when loading the executor handle. This would 
also make it possible to update the model parameters independently of the 
optimization cache:
   
   ```python
   # mxnet/executor.py
   def load(filename_exec, filename_symbol, filename_params):
       """Loads and binds the executor handle.
   
       Parameters
       ----------
       filename_exec : str
           Path to the executor file (e.g. "executor.exec").
       filename_symbol : str
           Path to the model architecture definition (e.g. "model.symbol").
       filename_params : str
           Path to the model weights (e.g. "model.params").
   
       References
       ----------
       `Saving and Loading of Executor handles \
       `_
       """
   ```




[GitHub] [incubator-mxnet] QueensGambit commented on issue #16173: Saving and loading cudNN autotune and graph optimization

2019-10-03 Thread GitBox
QueensGambit commented on issue #16173: Saving and loading cudNN autotune and 
graph optimization
URL: 
https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-537932809
 
 
   Thank you for the feedback @KellenSunderland.
   I think the mini-batch size should be included in the caching specification as well, because optimization techniques like TensorRT depend on it.
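
   As a rough illustration of that point, a cache key for stored autotune/optimization results could fold the batch size in alongside the other platform specifics, so a cached result is never reused under a different batch size. This is a minimal sketch; the `autotune_cache_key` helper and its fields are illustrative, not part of MXNet:

   ```python
   import hashlib

   def autotune_cache_key(model_name, gpu_name, cudnn_version, batch_size, input_shape):
       """Derive a cache key that changes whenever any platform-specific
       detail changes, including the (mini-)batch size."""
       spec = "|".join([
           model_name,
           gpu_name,
           cudnn_version,
           str(batch_size),
           "x".join(str(d) for d in input_shape),
       ])
       return hashlib.sha256(spec.encode("utf-8")).hexdigest()[:16]

   # The same specification always maps to the same key...
   key_a = autotune_cache_key("resnet50", "Tesla V100", "7.6.3", 32, (3, 224, 224))
   # ...while changing only the batch size invalidates the cache entry.
   key_b = autotune_cache_key("resnet50", "Tesla V100", "7.6.3", 64, (3, 224, 224))
   ```

   Any detail left out of the key (as batch size currently would be) silently allows a stale cache hit.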
   
   A different approach would be to define save and load functions for the [Executor class](https://github.com/apache/incubator-mxnet/blob/992c3c0dd90c0723de6934e826a49bad6569eeac/include/mxnet/executor.h#L53).
   The serialized file of an executor object would contain all additional platform-specific definitions and optimization results. This would allow the user to run the full binding process once on a specific platform and then rebind much more quickly later:
   
   ```python
   # mxnet/executor.py
   def save(filename_exec, type):
       """Saves the executor handle, including platform-specific optimizations of the graph.
       Must be run after the executor handle was bound: `model.bind()`.

       Parameters
       ----------
       filename_exec : str
           Path to the executor file (e.g. "executor.exec").

       References
       ----------
       `Saving and Loading of Executor handles \
       `_
       """
   ```
   
   In order to avoid an additional copy of the model parameters where possible, one needs to specify the `.params` and `.symbol` file paths when loading the executor handle:
   
   ```python
   # mxnet/executor.py
   def load(filename_exec, filename_symbol, filename_params):
       """Loads and binds the executor handle.

       Parameters
       ----------
       filename_exec : str
           Path to the executor file (e.g. "executor.exec").
       filename_symbol : str
           Path to the model architecture definition (e.g. "model.symbol").
       filename_params : str
           Path to the model weights (e.g. "model.params").

       References
       ----------
       `Saving and Loading of Executor handles \
       `_
       """
   ```
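
   The save/load round trip this proposal implies can be sketched generically. Here a plain dict stands in for the platform-specific optimization results — illustrative only, since a real executor handle is a C++ object that would need serialization support in the MXNet backend, and neither `save` nor `load` exists on `Executor` today:

   ```python
   import os
   import pickle
   import tempfile

   # Stand-in for platform-specific optimization results; a real executor
   # handle cannot simply be pickled like this.
   optimized = {"cudnn_algos": {"conv0": 1, "conv1": 6}, "batch_size": 32}

   def save(filename_exec, results):
       """Persist optimization results so the expensive bind can be skipped next time."""
       with open(filename_exec, "wb") as f:
           pickle.dump(results, f)

   def load(filename_exec):
       """Restore previously saved optimization results."""
       with open(filename_exec, "rb") as f:
           return pickle.load(f)

   path = os.path.join(tempfile.mkdtemp(), "executor.exec")
   save(path, optimized)
   restored = load(path)
   ```

   The interesting design question is exactly what goes into `optimized`: the more platform detail it captures, the faster the rebind, but the narrower the set of platforms the file is valid on.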




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-10-03 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 9ce2b2a  Bump the publish timestamp.
9ce2b2a is described below

commit 9ce2b2a3a72564ed2db73c1f468810b1a38055ea
Author: mxnet-ci 
AuthorDate: Thu Oct 3 12:36:14 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..163fa3d
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Oct  3 12:36:14 UTC 2019



[GitHub] [incubator-mxnet] marcoabreu commented on issue #16363: Julia: add API docs back

2019-10-03 Thread GitBox
marcoabreu commented on issue #16363: Julia: add API docs back
URL: https://github.com/apache/incubator-mxnet/pull/16363#issuecomment-537904494
 
 
   Is there a chance to automatically determine the files for autodoc instead of having to hardcode them?
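
   One way to avoid the hardcoding would be to glob the source tree at docs-build time. A sketch in Python, assuming the docs generator can accept a plain file list; the directory layout below is made up for demonstration:

   ```python
   from pathlib import Path
   import tempfile

   def autodoc_sources(root, ext=".jl"):
       """Return every source file under `root`, sorted for a stable docs build."""
       return sorted(p.relative_to(root).as_posix() for p in Path(root).rglob(f"*{ext}"))

   # Demonstrate on a throwaway tree instead of a real checkout.
   root = Path(tempfile.mkdtemp())
   for name in ["ndarray.jl", "symbol.jl", "io/dataiter.jl"]:
       target = root / name
       target.parent.mkdir(parents=True, exist_ok=True)
       target.touch()

   files = autodoc_sources(root)  # -> ['io/dataiter.jl', 'ndarray.jl', 'symbol.jl']
   ```

   Sorting matters here: an unsorted glob can change order between filesystems and make the generated docs diff noisy.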




[GitHub] [incubator-mxnet] marcoabreu merged pull request #16363: Julia: add API docs back

2019-10-03 Thread GitBox
marcoabreu merged pull request #16363: Julia: add API docs back
URL: https://github.com/apache/incubator-mxnet/pull/16363
 
 
   



