[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17416: [Numpy] add polynomial polyval

2020-02-13 Thread GitBox
haojin2 commented on a change in pull request #17416: [Numpy] add polynomial 
polyval
URL: https://github.com/apache/incubator-mxnet/pull/17416#discussion_r378702418
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -7287,6 +7287,72 @@ def hybrid_forward(self, F, a):
     check_unary_func("isnan")
     check_unary_func("isinf")
 
+@with_seed()
 
 Review comment:
  Add a blank line above this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 
cross-compilation (#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#discussion_r378704118
 
 

 ##
 File path: scala-package/pom.xml
 ##
 @@ -423,23 +590,11 @@
   1.7.7
   provided
 
-
-  org.scalatest
-  scalatest_2.11
 
 Review comment:
  I tried to do this, but it doesn't work. It looks like `maven-remote-resources-plugin` doesn't resolve properties in dependencies. Here is the error log: https://gist.github.com/fomkin/a26aefc59eaca1148dfed3719a8144f0 




[GitHub] [incubator-mxnet] fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 
cross-compilation (#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#discussion_r378704118
 
 

 ##
 File path: scala-package/pom.xml
 ##
 @@ -423,23 +590,11 @@
   1.7.7
   provided
 
-
-  org.scalatest
-  scalatest_2.11
 
 Review comment:
  I tried to do this, but it doesn't work. It looks like `maven-remote-resources-plugin` doesn't resolve profile-dependent properties in dependencies. Here is the error log: https://gist.github.com/fomkin/a26aefc59eaca1148dfed3719a8144f0 




[GitHub] [incubator-mxnet] fomkin commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation 
(#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585603916
 
 
   @zachgk 
   > In BertQA.java lines 77 and 79, try to use null as the second argument to 
the softmaxParam constructor.
   
   Thanks. Now it compiles.




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17530: Add deferred compute support

2020-02-13 Thread GitBox
samskalicky commented on a change in pull request #17530: Add deferred compute 
support
URL: https://github.com/apache/incubator-mxnet/pull/17530#discussion_r378708288
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -83,7 +83,7 @@ class NDArray {
  public:
   /*! \brief default constructor */
   NDArray()
-: entry_(nullptr) {
+: autograd_entry_(nullptr) {
 
 Review comment:
  Why don't we initialize `deferredcompute_entry_` in these constructors?




[GitHub] [incubator-mxnet] Yiyan66 opened a new pull request #17586: [numpy] add op random.f

2020-02-13 Thread GitBox
Yiyan66 opened a new pull request #17586: [numpy] add op random.f
URL: https://github.com/apache/incubator-mxnet/pull/17586
 
 
   ## Description ##
   add op random.f
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 
cross-compilation (#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#discussion_r378708695
 
 

 ##
 File path: scala-package/core/src/test/scala/org/apache/mxnet/NDArraySuite.scala
 ##
 @@ -357,24 +357,24 @@ class NDArraySuite extends FunSuite with BeforeAndAfterAll with Matchers {
       val step = scala.util.Random.nextFloat() * 4
       val repeat = 1
 
-      val result1 = (start.toDouble until stop.toDouble by step.toDouble)
+      val result1 = (BigDecimal(start) until BigDecimal(stop) by BigDecimal(step))
         .flatMap(x => Array.fill[Float](repeat)(x.toFloat))
       val range1 = NDArray.arange(start = start, stop = Some(stop), step = step, repeat = repeat)
       assert(CheckUtils.reldiff(result1.toArray, range1.toArray) <= 1e-4f)
 
-      val result2 = (0.0 until stop.toDouble by step.toDouble)
+      val result2 = (BigDecimal(0.0) until BigDecimal(stop) by BigDecimal(step))
         .flatMap(x => Array.fill[Float](repeat)(x.toFloat))
       val range2 = NDArray.arange(stop, step = step, repeat = repeat)
       assert(CheckUtils.reldiff(result2.toArray, range2.toArray) <= 1e-4f)
 
-      val result3 = 0f to stop by 1f
+      val result3 = (BigDecimal(0) until BigDecimal(stop) by BigDecimal(1)).map(_.toFloat)
 
 Review comment:
   Thanks!
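The motivation for the `BigDecimal` change above can be seen in plain Python with `decimal.Decimal` (a sketch, not taken from the PR; the 0.0 / 1.0 / 0.1 values are illustrative): stepping a binary float accumulates rounding error, so a naive float range can even produce the wrong number of elements, while exact decimal arithmetic cannot.

```python
from decimal import Decimal

def float_range(start, stop, step):
    # Naive float stepping: each addition of 0.1 accumulates rounding error.
    out, x = [], start
    while x < stop:
        out.append(x)
        x += step
    return out

def decimal_range(start, stop, step):
    # Exact decimal arithmetic, analogous to Scala's BigDecimal ranges.
    x, stop, step = Decimal(start), Decimal(stop), Decimal(step)
    out = []
    while x < stop:
        out.append(float(x))
        x += step
    return out

# Exact arithmetic gives 10 elements; naive float stepping gives 11,
# because ten additions of 0.1 land at 0.9999999999999999, still < 1.0.
print(len(decimal_range("0.0", "1.0", "0.1")))  # 10
print(len(float_range(0.0, 1.0, 0.1)))          # 11
```

This is exactly the class of drift the test above avoids by switching from `Float`/`Double` ranges to `BigDecimal` ranges.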




[GitHub] [incubator-mxnet] fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 
cross-compilation (#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#discussion_r378685736
 
 

 ##
 File path: scala-package/assembly/pom.xml
 ##
 @@ -26,7 +26,7 @@
 ../pom.xml
   
 
-  mxnet-full_2.11
+  mxnet-full
 
 Review comment:
  This is an artifactId in the Maven _reactor_ scope. The concrete Scala version suffix is added during the deploy/install phase (see `deploy/pom.xml`). This means that the artifact published to Central will have the proper Scala version suffix, so it shouldn't break other users.




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17530: Add deferred compute support

2020-02-13 Thread GitBox
samskalicky commented on a change in pull request #17530: Add deferred compute 
support
URL: https://github.com/apache/incubator-mxnet/pull/17530#discussion_r378709061
 
 

 ##
 File path: python/mxnet/_deferred_compute.py
 ##
 @@ -0,0 +1,95 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Deferred Compute for NDArray."""
+
+import ctypes
+import contextlib
+
+from .base import _LIB, check_call, SymbolHandle, _as_list
+from .symbol import Symbol
+
+__all__ = []
+
+def is_deferred_compute():
+    """Get status of deferred compute mode."""
+    curr = ctypes.c_bool()
+    check_call(_LIB.MXNDArrayIsDeferredComputeEnabled(ctypes.byref(curr)))
+    return curr.value
+
+def set_deferred_compute(is_deferred_compute):
+    """Enable / Disable deferred compute mode.
+
+    Parameters
+    ----------
+    is_deferred_compute: bool
+
+    Returns
+    -------
+    Previous deferred compute state.
+    """
+    prev = ctypes.c_int()
+    check_call(_LIB.MXNDArraySetDeferredComputeEnabled(
+        ctypes.c_int(is_deferred_compute), ctypes.byref(prev)))
+    return bool(prev.value)
+
+
+@contextlib.contextmanager
+def context():
+    # Like other MXNet context manager, this bleeds state across concurrent
 
 Review comment:
  Does this have any conflicts with the work in #16654?




[GitHub] [incubator-mxnet] fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 
cross-compilation (#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#discussion_r378712251
 
 

 ##
 File path: scala-package/pom.xml
 ##
 @@ -423,23 +590,11 @@
   1.7.7
   provided
 
-
-  org.scalatest
-  scalatest_2.11
 
 Review comment:
  Well, it is solved by removing the _default_ properties: `maven-remote-resources-plugin` ignores a property from a profile if it has a default value.




[GitHub] [incubator-mxnet] connorgoggins commented on a change in pull request #17511: Implement all miscellaneous ops

2020-02-13 Thread GitBox
connorgoggins commented on a change in pull request #17511: Implement all 
miscellaneous ops
URL: https://github.com/apache/incubator-mxnet/pull/17511#discussion_r378716903
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -437,6 +458,26 @@ def get_all_loss_operators():
     return loss_mx_operators
 
 
+def get_remaining_miscellaneous_operators():
+    """Gets remaining Miscellaneous operators registered with MXNet not covered by individual tests.
+
+    Returns
+    -------
+    {"operator_name": {"has_backward", "nd_op_handle", "params"}}
+    """
+    misc_ops = ['squeeze', 'all_finite', 'clip', 'multi_lars', 'SequenceReverse', 'SequenceLast',
+                'SequenceMask', 'cast_storage', 'cumsum', 'fill_element_0index']
+
+    # Get all mxnet operators
+    mx_operators = _get_all_mxnet_operators()
+
+    # Filter for Miscellaneous operators
+    misc_mx_operators = {}
+    for op_name, _ in mx_operators.items():
+        if op_name in misc_ops:
 Review comment:
   Great insights @larroy, I’ll make those changes.
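As an aside, the membership-filter loop in the diff above can be collapsed into a dict comprehension. The data below is an illustrative toy stand-in; the real operator dict comes from `_get_all_mxnet_operators()`.

```python
# Toy stand-ins for the operator registry (illustrative values only).
misc_ops = {'squeeze', 'clip', 'cumsum'}
mx_operators = {'squeeze': 'handle1', 'clip': 'handle2', 'dot': 'handle3'}

# Equivalent of the for/if loop in the diff, as a dict comprehension:
misc_mx_operators = {name: handle for name, handle in mx_operators.items()
                     if name in misc_ops}
print(misc_mx_operators)  # {'squeeze': 'handle1', 'clip': 'handle2'}
```

Using a set for `misc_ops` also makes the membership test O(1) instead of a linear scan over a list.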




[GitHub] [incubator-mxnet] hkvision opened a new issue #17587: Need help: issues with manual build whl

2020-02-13 Thread GitBox
hkvision opened a new issue #17587: Need help: issues with manual build whl
URL: https://github.com/apache/incubator-mxnet/issues/17587
 
 
   Hi,
   
   Are there any instructions for building a whl for mxnet that is equivalent to the whl you release on PyPI?
   
   I have compiled the project using MKL, with libmxnet.so under incubator-mxnet/lib. I then ran `python setup.py bdist_wheel` under `incubator-mxnet/python` and got a whl under `dist`. If I pip install this whl, I encounter the following issues when importing mxnet:
   - Cannot find libmxnet.so. I am using conda, and it seems the file is under `/root/anaconda3/envs/py36/mxnet/libmxnet.so` instead of `/root/anaconda3/envs/py36/lib/python3.6/site-packages/mxnet/`, so it cannot be found by the code.
   - Will the .so files for libopencv be included?
   - The dmlc_tracker package is not installed. However, if I install from the released PyPI package, it is installed automatically. Related issue: 
https://discuss.mxnet.io/t/cant-load-dmlc-tracker-package/1282/3
   
   Thanks so much in advance!




[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #17328: [numpy] add op pad

2020-02-13 Thread GitBox
haojin2 commented on a change in pull request #17328: [numpy] add op pad
URL: https://github.com/apache/incubator-mxnet/pull/17328#discussion_r378723574
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -6877,3 +6876,132 @@ def bincount(x, weights=None, minlength=0):
     if weights is None:
         return _npi.bincount(x, minlength=minlength, has_weights=False)
     return _npi.bincount(x, weights=weights, minlength=minlength, has_weights=True)
+
+
+@set_module('mxnet.ndarray.numpy')
+def pad(x, pad_width=None, mode="constant", stat_length=None, constant_values=0, end_values=0, reflect_type="even"):  # pylint: disable=too-many-arguments
+    """
+    Pad an array.
+
+    Parameters
+    ----------
+    array : array_like of rank N
+        The array to pad.
+    pad_width : {sequence, array_like, int}
+        Number of values padded to the edges of each axis.
+        ((before_1, after_1), ... (before_N, after_N)) unique pad widths
+        for each axis.
+        ((before, after),) yields same before and after pad for each axis.
+        (pad,) or int is a shortcut for before = after = pad width for all
+        axes.
+    mode : str or function, optional
+        One of the following string values or a user supplied function.
+        'constant' (default)
+            Pads with a constant value.
+        'edge'
+            Pads with the edge values of array.
+        'linear_ramp'
+            not supported yet
+        'maximum'
+            Pads with the maximum value of all of the vector along each axis.
+        'mean'
+            not supported yet
+        'median'
+            not supported yet
+        'minimum'
+            Pads with the minimum value of all of the vector along each axis.
+        'reflect'
+            Pads with the reflection of the vector mirrored on the first and
+            last values of the vector along each axis.
+        'symmetric'
+            Pads with the reflection of the vector mirrored along the edge of
+            the array.
+        'wrap'
+            not supported yet
+        'empty'
+            Pads with undefined values.
+            .. versionadded:: 1.17
+
 
 Review comment:
  We don't support this mode, so we should remove it from the doc. Please also check the other parts of the docs to make sure they accurately reflect our implementation.
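For comparison, here is the reference behavior of the two modes the docstring does claim to support ('constant' and 'edge'), as a quick sanity check against stock `numpy.pad` (this exercises upstream NumPy, not the MXNet implementation under review):

```python
import numpy as np

a = np.array([1, 2, 3])

# 'constant' (the default) pads both sides with constant_values.
print(np.pad(a, 1, mode="constant", constant_values=0))  # [0 1 2 3 0]

# 'edge' repeats the boundary value of the array.
print(np.pad(a, 2, mode="edge"))                         # [1 1 1 2 3 3 3]
```

Any MXNet `np.pad` docstring claim should be checkable against upstream NumPy in this fashion.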




[GitHub] [incubator-mxnet] haojin2 merged pull request #16990: [numpy] add op matmul

2020-02-13 Thread GitBox
haojin2 merged pull request #16990: [numpy] add op matmul
URL: https://github.com/apache/incubator-mxnet/pull/16990
 
 
   




[incubator-mxnet] branch master updated (93c123d -> f5a1014)

2020-02-13 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 93c123d  [OpPerf] Add norm, cast ops, remaining optimizer ops (#17542)
 add f5a1014  [numpy] add op matmul (#16990)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  | 102 -
 python/mxnet/numpy/multiarray.py   | 105 -
 python/mxnet/numpy_dispatch_protocol.py|   1 +
 python/mxnet/symbol/numpy/_symbol.py   |  58 ++-
 src/operator/numpy/np_matmul_op-inl.h  | 425 +
 src/operator/numpy/np_matmul_op.cc | 174 +
 .../numpy/{np_cumsum.cu => np_matmul_op.cu}|  17 +-
 .../python/unittest/test_numpy_interoperability.py |  98 +
 tests/python/unittest/test_numpy_op.py | 135 +++
 9 files changed, 1100 insertions(+), 15 deletions(-)
 create mode 100644 src/operator/numpy/np_matmul_op-inl.h
 create mode 100644 src/operator/numpy/np_matmul_op.cc
 copy src/operator/numpy/{np_cumsum.cu => np_matmul_op.cu} (72%)





[GitHub] [incubator-mxnet] lilipj commented on issue #15920: KeyError using onnx2mx on basic Keras LSTM model

2020-02-13 Thread GitBox
lilipj commented on issue #15920: KeyError using onnx2mx on basic Keras LSTM 
model
URL: 
https://github.com/apache/incubator-mxnet/issues/15920#issuecomment-585635223
 
 
   I have the same problem with keras2onnx and onnxmltools, but when I change the target_opset to 8 in the onnxmltools.convert_keras function, it works. When target_opset is 9, it fails.




[GitHub] [incubator-mxnet] lilipj commented on issue #16590: import_onnx.py parser for onnx opset >= 9 has bug

2020-02-13 Thread GitBox
lilipj commented on issue #16590: import_onnx.py parser for onnx opset >= 9 has 
bug
URL: 
https://github.com/apache/incubator-mxnet/issues/16590#issuecomment-585636375
 
 
   I have the same problem with keras2onnx.
   I tried onnxmltools to do the conversion from keras to onnx, and it fails with the same error. But when I change the target_opset to 8 in the onnxmltools.convert_keras() function, it works.




[GitHub] [incubator-mxnet] Jerryzcn commented on issue #16803: src/storage/./pooled_storage_manager.h:157: cudaMalloc failed: out of memory

2020-02-13 Thread GitBox
Jerryzcn commented on issue #16803: src/storage/./pooled_storage_manager.h:157: 
cudaMalloc failed: out of memory
URL: 
https://github.com/apache/incubator-mxnet/issues/16803#issuecomment-585640155
 
 
   What version of MXNet are you using?
   




[GitHub] [incubator-mxnet] Jerryzcn opened a new issue #17588: OOM when training GluonCV Mask/Faster RCNN

2020-02-13 Thread GitBox
Jerryzcn opened a new issue #17588: OOM when training GluonCV Mask/Faster RCNN
URL: https://github.com/apache/incubator-mxnet/issues/17588
 
 
   On a P3.16x, I wasn't able to train either Faster or Mask RCNN resnet50_v1b with FPN.
   This started after version 1.6.0b20191016: the next nightly build, 1.6.0b20191029, will OOM after 10 epochs of training.




[GitHub] [incubator-mxnet] ymzx commented on issue #16803: src/storage/./pooled_storage_manager.h:157: cudaMalloc failed: out of memory

2020-02-13 Thread GitBox
ymzx commented on issue #16803: src/storage/./pooled_storage_manager.h:157: 
cudaMalloc failed: out of memory
URL: 
https://github.com/apache/incubator-mxnet/issues/16803#issuecomment-585642305
 
 
   > what version of mxnet are u using
   @Jerryzcn 
   mxnet.__version__  is  '1.4.0'




[GitHub] [incubator-mxnet] fomkin commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation 
(#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585675253
 
 
   But it throws an exception :face_with_head_bandage: 








[GitHub] [incubator-mxnet] fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on a change in pull request #17503: Add Scala 2.12 and 2.13 
cross-compilation (#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#discussion_r378800395
 
 

 ##
 File path: scala-package/assembly/pom.xml
 ##
 @@ -26,7 +26,7 @@
 ../pom.xml
   
 
-  mxnet-full_2.11
+  mxnet-full
 
 Review comment:
  When installing scala-package locally, `~/.m2/repository` will contain the artifact without the suffix. I'm sure this is an advanced usage option and the user knows what he or she is doing; at the very least, he or she explicitly sets the profile with the Scala version.




[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #17586: [numpy] add op random.f

2020-02-13 Thread GitBox
xidulu commented on a change in pull request #17586: [numpy] add op random.f
URL: https://github.com/apache/incubator-mxnet/pull/17586#discussion_r378808923
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -3650,6 +3650,38 @@ def _test_random_beta_range(output):
     assert _test_random_beta_range(mx_out_imperative.asnumpy()) == True
 
 
+@with_seed()
+@use_np
+def test_np_random_f():
+    class TestRandomF(HybridBlock):
+        def __init__(self, size=None, ctx=None):
+            super(TestRandomF, self).__init__()
+            self._size = size
+            self._ctx = ctx
 
 Review comment:
   No need for this.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-13 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 3464873  Bump the publish timestamp.
3464873 is described below

commit 34648730862d3dca5a114cd49b13e6b730c6ef72
Author: mxnet-ci 
AuthorDate: Thu Feb 13 12:42:43 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..790c061
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Feb 13 12:42:43 UTC 2020



[GitHub] [incubator-mxnet] Chilipp commented on issue #17561: pin Sphinx due to autodocsumm issue with v2.4.0

2020-02-13 Thread GitBox
Chilipp commented on issue #17561: pin Sphinx due to autodocsumm issue with 
v2.4.0
URL: https://github.com/apache/incubator-mxnet/pull/17561#issuecomment-585771489
 
 
   hey @aaronmarkham: This PR can be reverted, as the new version of autodocsumm (0.1.12) works well with Sphinx 2.4.0 (see 
https://github.com/Chilipp/autodocsumm/issues/22#issuecomment-585771064)




[GitHub] [incubator-mxnet] gigasquid commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
gigasquid commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation 
(#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585801589
 
 
   > @fomkin Removing the examples may fix the CI now, but it will be bad for 
the project down the line as those examples would no longer be maintained. It 
is odd that those examples were not working. I looked at the current problem. 
In BertQA.java lines 77 and 79, try to use `null` as the second argument to the 
softmaxParam constructor. If this doesn't work or other examples are broken, I 
can help fix them or you can ask whoever created the example (see git blame).
   > 
   > @gigasquid Would this affect clojure at all?
   
   No - That shouldn't affect Clojure. Thanks for thinking about us though :)




[GitHub] [incubator-mxnet] fomkin commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation 
(#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585825758
 
 
   @gigasquid Could you look at this: http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/PR-17503/7/pipeline#step-600-log-1655 ? What did I do wrong?




[GitHub] [incubator-mxnet] fomkin commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation 
(#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585829852
 
 
   @lanking520 Could you look at `BertQA.java`? The code uses an obsolete API. It 
never compiled in CI before because of a bug in `scala-maven-plugin`. Now it 
compiles and breaks the build. I tried to fix this by passing the same NDArray 
as the length to softmaxParam. It became compilable, but the tests fail with this error: 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-17503/7/pipeline#step-804-log-1708




[GitHub] [incubator-mxnet] gigasquid commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
gigasquid commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation 
(#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585834306
 
 
   > @gigasquid Could you look at this: http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/PR-17503/7/pipeline#step-600-log-1655 ? What did I do wrong?
   
   I stand corrected. It does affect the Clojure package as well. A library we 
use depends on the old version of Scala and would need to be forked/upgraded to 
work: https://github.com/t6/from-scala/blob/master/project.clj




[GitHub] [incubator-mxnet] TaoLv commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
TaoLv commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation 
(#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585834936
 
 
   You may also want to set `use_length=True`. See the API definition: 
https://mxnet.incubator.apache.org/api/python/docs/api/symbol/symbol.html#mxnet.symbol.softmax
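
   As a side note (an illustrative sketch, not MXNet's implementation): 
`use_length=True` together with a `length` array restricts each row's softmax 
to its first `length` entries, zeroing out padding. A pure-Python version of 
that per-row behavior:

```python
import math

def masked_softmax(row, length):
    # Softmax over the first `length` entries only; entries beyond
    # `length` (e.g. padding tokens in BERT inputs) get probability 0.
    valid = row[:length]
    m = max(valid)                               # subtract max for stability
    exps = [math.exp(v - m) for v in valid]
    total = sum(exps)
    return [e / total for e in exps] + [0.0] * (len(row) - length)

probs = masked_softmax([1.0, 2.0, 3.0, 9.9], length=3)
# The out-of-length entry (9.9) contributes nothing to the distribution.
```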




[GitHub] [incubator-mxnet] fomkin edited a comment on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin edited a comment on issue #17503: Add Scala 2.12 and 2.13 
cross-compilation (#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585840193
 
 
   @gigasquid Looks like it is simpler to downgrade Scala 2.11 back to 2.11.8. 
Upgrading the Scala version is not the purpose of this PR.




[GitHub] [incubator-mxnet] fomkin commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation 
(#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585840193
 
 
   @gigasquid Looks like it is simple to downgrade Scala 2.11 back to 2.11.8. 
Upgrading the Scala version is not the purpose of this PR.




[GitHub] [incubator-mxnet] fomkin edited a comment on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
fomkin edited a comment on issue #17503: Add Scala 2.12 and 2.13 
cross-compilation (#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585840193
 
 
   @gigasquid Looks like it is simpler to downgrade Scala 2.11 back to 2.11.8. 
Upgrading the minor Scala version is not the purpose of this PR.




[GitHub] [incubator-mxnet] leezu commented on issue #17521: cmake: don't build PTX and 3.5 arch if cuda arch detection fails

2020-02-13 Thread GitBox
leezu commented on issue #17521: cmake: don't build PTX and 3.5 arch if cuda 
arch detection fails
URL: https://github.com/apache/incubator-mxnet/pull/17521#issuecomment-585855436
 
 
   @ptrendx let's merge this PR as is? Without it users can run into #16852 
easily.




[GitHub] [incubator-mxnet] sl1pkn07 opened a new issue #17589: OSError: /tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: _ZN4dmlc14RecordIOReader10NextRecordEPSs

2020-02-13 Thread GitBox
sl1pkn07 opened a new issue #17589: OSError: 
/tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: 
_ZN4dmlc14RecordIOReader10NextRecordEPSs
URL: https://github.com/apache/incubator-mxnet/issues/17589
 
 
   ~~~
   Scanning dependencies of target cpp_package_op_h
   make[2]: Leaving directory '/tmp/makepkg/sl1-mxnet-git/src/build'
   make -f cpp-package/CMakeFiles/cpp_package_op_h.dir/build.make 
cpp-package/CMakeFiles/cpp_package_op_h.dir/build
   make[2]: Entering directory '/tmp/makepkg/sl1-mxnet-git/src/build'
   cd /tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/cpp-package/scripts && 
echo Running:\ OpWrapperGenerator.py
   Running: OpWrapperGenerator.py
   cd /tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/cpp-package/scripts && 
python OpWrapperGenerator.py 
/tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1
    Traceback (most recent call last):
  File "OpWrapperGenerator.py", line 433, in <module>
    raise(e)
  File "OpWrapperGenerator.py", line 427, in <module>
    f.write(patternStr % ParseAllOps())
  File "OpWrapperGenerator.py", line 321, in ParseAllOps
    cdll.libmxnet = cdll.LoadLibrary(sys.argv[1])
  File "/usr/lib/python3.8/ctypes/__init__.py", line 451, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
    OSError: /tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: _ZN4dmlc14RecordIOReader10NextRecordEPSs
   make[2]: *** [cpp-package/CMakeFiles/cpp_package_op_h.dir/build.make:59: 
cpp-package/CMakeFiles/cpp_package_op_h] Error 1
   ~~~
   
   
   linux
   python 3.8.1
   mxnet f5a1014479351f3139b92c437d65de3a3a653196
    gcc 8.3.0; also happens with gcc 9.2.1
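
   A note on the symbol itself (a decoding sketch, assuming standard Itanium 
C++ ABI mangling): `_ZN4dmlc14RecordIOReader10NextRecordEPSs` demangles to 
`dmlc::RecordIOReader::NextRecord(std::string*)`, and the trailing `Ss` is the 
pre-C++11-ABI `std::string`, which usually hints at mixing binaries built 
against different libstdc++ ABIs or dmlc-core versions. The nested-name 
components are length-prefixed and can be split mechanically:

```python
def split_mangled_name(sym):
    # Itanium mangling: _ZN <len><id> <len><id> ... E <argument encodings>
    assert sym.startswith("_ZN")
    i, parts = 3, []
    while sym[i] != "E":
        j = i
        while sym[j].isdigit():
            j += 1
        n = int(sym[i:j])          # length prefix of the next identifier
        parts.append(sym[j:j + n])
        i = j + n
    return parts, sym[i + 1:]      # the tail encodes the arguments

parts, args = split_mangled_name("_ZN4dmlc14RecordIOReader10NextRecordEPSs")
# parts == ['dmlc', 'RecordIOReader', 'NextRecord'], args == 'PSs'
```

   `P` is "pointer to" and `Ss` abbreviates `std::string`, hence 
`NextRecord(std::string*)`.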




[GitHub] [incubator-mxnet] leezu commented on issue #17589: OSError: /tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: _ZN4dmlc14RecordIOReader10NextRecordEPSs

2020-02-13 Thread GitBox
leezu commented on issue #17589: OSError: 
/tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: 
_ZN4dmlc14RecordIOReader10NextRecordEPSs
URL: 
https://github.com/apache/incubator-mxnet/issues/17589#issuecomment-585862892
 
 
   Please provide full cmake configuration and build logs.




[GitHub] [incubator-mxnet] sl1pkn07 commented on issue #17589: OSError: /tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: _ZN4dmlc14RecordIOReader10NextRecordEPSs

2020-02-13 Thread GitBox
sl1pkn07 commented on issue #17589: OSError: 
/tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: 
_ZN4dmlc14RecordIOReader10NextRecordEPSs
URL: 
https://github.com/apache/incubator-mxnet/issues/17589#issuecomment-585863953
 
 
   ~~~
cmake ../incubator-mxnet \
   -DCMAKE_BUILD_TYPE=None \
   -DCMAKE_INSTALL_PREFIX=/usr \
   -DCMAKE_INSTALL_LIBDIR=lib \
   -DCMAKE_INSTALL_DOCDIR=share/doc/mxnet \
   -DENABLE_CUDA_RTC=ON \
   -DBUILD_SHARED_LIBS=ON \
   -DBUILD_CPP_EXAMPLES=OFF \
   -DUSE_CCACHE=OFF \
   -DUSE_CXX14_IF_AVAILABLE=ON \
   -DUSE_CPP_PACKAGE=ON \
   -DUSE_CUDNN=ON \
   -DUSE_NCCL=ON \
   -DUSE_OPENCV=ON \
   -DUSE_OPENMP=ON \
   -DUSE_MKLDNN=OFF \
   -DUSE_LAPACK=ON \
   -DUSE_JEMALLOC=OFF \
   -DUSE_GPERFTOOLS=OFF \
   -DNCCL_ROOT=/usr \
   -DCUDA_HOST_COMPILER=/usr/bin/cc-8 \
   -DCMAKE_C_COMPILER=/usr/bin/cc-8 \
   -DCMAKE_C_COMPILER_AR=/usr/bin/gcc-ar-8 \
   -DCMAKE_C_COMPILER_RANLIB=/usr/bin/gcc-ranlib-8 \
   -DCMAKE_CXX_COMPILER=/usr/bin/c++-8 \
   -DCMAKE_CXX_COMPILER_AR=/usr/bin/gcc-ar-8 \
   -DCMAKE_CXX_COMPILER_RANLIB=/usr/bin/gcc-ranlib-8
   ~~~
   
   the build log is too extensive




[GitHub] [incubator-mxnet] leezu commented on a change in pull request #17530: Add deferred compute support

2020-02-13 Thread GitBox
leezu commented on a change in pull request #17530: Add deferred compute support
URL: https://github.com/apache/incubator-mxnet/pull/17530#discussion_r379009156
 
 

 ##
 File path: include/mxnet/imperative.h
 ##
 @@ -88,6 +89,77 @@ class Imperative {
  && info.out_grads.size() == 1;
 }
   };
+
+  /*! \brief DCInfo datastructure to enable deferred computation */
+  class DCInfo {
+   public:
+DCInfo() {
+  // Default constructor provided for the sake of any.h. Should not be used.
+  throw std::invalid_argument("Unsupported default constructor");
+}
 
 Review comment:
   Good catch. This was only needed in an earlier, unpublished version of this 
PR.




[GitHub] [incubator-mxnet] aaronmarkham commented on issue #17561: pin Sphinx due to autodocsumm issue with v2.4.0

2020-02-13 Thread GitBox
aaronmarkham commented on issue #17561: pin Sphinx due to autodocsumm issue 
with v2.4.0
URL: https://github.com/apache/incubator-mxnet/pull/17561#issuecomment-585876293
 
 
   > hey @aaronmarkham: This PR can be reverted as the new version of 
autodocsumm (0.1.12) works well with sphinx 2.4.0 (see [Chilipp/autodocsumm#22 
(comment)](https://github.com/Chilipp/autodocsumm/issues/22#issuecomment-585771064))
   
   Awesome! I'll try it out. Thanks @Chilipp 




[GitHub] [incubator-mxnet] leezu commented on a change in pull request #17530: Add deferred compute support

2020-02-13 Thread GitBox
leezu commented on a change in pull request #17530: Add deferred compute support
URL: https://github.com/apache/incubator-mxnet/pull/17530#discussion_r379012093
 
 

 ##
 File path: include/mxnet/ndarray.h
 ##
 @@ -83,7 +83,7 @@ class NDArray {
  public:
   /*! \brief default constructor */
   NDArray()
-: entry_(nullptr) {
+: autograd_entry_(nullptr) {
 
 Review comment:
   It's not required? We just use the `NodeEntry` default constructor.




[GitHub] [incubator-mxnet] guanxinq commented on a change in pull request #17569: [WIP] Adding sparse support to MXTensor for custom operators

2020-02-13 Thread GitBox
guanxinq commented on a change in pull request #17569: [WIP] Adding sparse 
support to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r379018612
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -229,20 +241,54 @@ enum MXReturnValue {
   MX_SUCCESS = 1,
 };
 
+struct ChunkDense {
+  // Pointer to data.
+  void *data{nullptr};
+  // Size of data in bytes.
+  size_t dataSize{0};
+  // shape of data.
+  std::vector<int64_t> shape;
+  // Context of data.
+  // MXContext ctx;
+};
+
+struct ChunkSparse {
+  // Pointer to data.
+  void *data{nullptr};
+  // Size of data in bytes.
+  size_t dataSize{0};
+  // length of data.
+  int64_t data_lens;
+
+  // To store aux data for sparse.
+  // for row_sparse, aux_data[0] = indices
+  // for csr, aux_data[0] = indptr, aux_data[1] = indices
+  std::vector<std::vector<int64_t>> aux_data;
+
+  // Lens of the aux_data.
+  // for row_sparse, aux_lens[0] = len(indices)
+  // for csr, aux_lens[0] = len(indptr), aux_lens[1] = len(indices)
+  std::vector<int64_t> aux_lens;
+  // Context of data.
+  // MXContext ctx;
+};
+
 /*!
  * \brief Tensor data structure used by custom operator
  */
 struct MXTensor {
-  MXTensor() : data_ptr(NULL), dtype(kUNSET), verID(0) {}
+  MXTensor() : data_ptr(nullptr), dtype(kUNSET), verID(0), stype(kDefaultStorage) {}
 
+  // Constructor for dense.
   MXTensor(void *data_ptr, const std::vector<int64_t> &shape, MXDType dtype,
-   size_t vID, MXContext mx_ctx)
-  : data_ptr(data_ptr), shape(shape), dtype(dtype), verID(vID), ctx(mx_ctx) {}
+   size_t vID, MXContext mx_ctx, MXStorageType stype = kDefaultStorage)
 
 Review comment:
   Adding a default storage type here enables sparse support without any 
changes to the previous dense implementation. 
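
   To make the `indptr`/`indices` layout described in those comments concrete, 
here is a small standalone sketch (not MXNet code) that builds the CSR aux 
arrays for a 2x3 matrix:

```python
def dense_to_csr(dense):
    # data: the non-zeros, row by row; indices: their column ids;
    # indptr[r]:indptr[r+1] delimits row r's slice of data/indices,
    # matching the csr aux layout: aux[0] = indptr, aux[1] = indices.
    data, indices, indptr = [], [], [0]
    for row in dense:
        for c, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(c)
        indptr.append(len(indices))
    return data, indptr, indices

data, indptr, indices = dense_to_csr([[0, 7, 0],
                                      [5, 0, 9]])
# data == [7, 5, 9], indptr == [0, 1, 3], indices == [1, 0, 2]
```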




[GitHub] [incubator-mxnet] leezu commented on issue #17589: OSError: /tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: _ZN4dmlc14RecordIOReader10NextRecordEPSs

2020-02-13 Thread GitBox
leezu commented on issue #17589: OSError: 
/tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: 
_ZN4dmlc14RecordIOReader10NextRecordEPSs
URL: 
https://github.com/apache/incubator-mxnet/issues/17589#issuecomment-585885483
 
 
   Without the build log, it's impossible to advise about the root cause of 
your issue.
   
   You can attach the file in a comment.




[GitHub] [incubator-mxnet] leezu edited a comment on issue #17589: OSError: /tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: _ZN4dmlc14RecordIOReader10NextRecordEPSs

2020-02-13 Thread GitBox
leezu edited a comment on issue #17589: OSError: 
/tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: 
_ZN4dmlc14RecordIOReader10NextRecordEPSs
URL: 
https://github.com/apache/incubator-mxnet/issues/17589#issuecomment-585885483
 
 
   Without the build log and the output of the above cmake command, it's 
impossible to advise about the root cause of your issue.
   
   You can attach the file in a comment.




[GitHub] [incubator-mxnet] guanxinq commented on a change in pull request #17569: [WIP] Adding sparse support to MXTensor for custom operators

2020-02-13 Thread GitBox
guanxinq commented on a change in pull request #17569: [WIP] Adding sparse 
support to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r379018612
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -229,20 +241,54 @@ enum MXReturnValue {
   MX_SUCCESS = 1,
 };
 
+struct ChunkDense {
+  // Pointer to data.
+  void *data{nullptr};
+  // Size of data in bytes.
+  size_t dataSize{0};
+  // shape of data.
+  std::vector<int64_t> shape;
+  // Context of data.
+  // MXContext ctx;
+};
+
+struct ChunkSparse {
+  // Pointer to data.
+  void *data{nullptr};
+  // Size of data in bytes.
+  size_t dataSize{0};
+  // length of data.
+  int64_t data_lens;
+
+  // To store aux data for sparse.
+  // for row_sparse, aux_data[0] = indices
+  // for csr, aux_data[0] = indptr, aux_data[1] = indices
+  std::vector<std::vector<int64_t>> aux_data;
+
+  // Lens of the aux_data.
+  // for row_sparse, aux_lens[0] = len(indices)
+  // for csr, aux_lens[0] = len(indptr), aux_lens[1] = len(indices)
+  std::vector<int64_t> aux_lens;
+  // Context of data.
+  // MXContext ctx;
+};
+
 /*!
  * \brief Tensor data structure used by custom operator
  */
 struct MXTensor {
-  MXTensor() : data_ptr(NULL), dtype(kUNSET), verID(0) {}
+  MXTensor() : data_ptr(nullptr), dtype(kUNSET), verID(0), stype(kDefaultStorage) {}
 
+  // Constructor for dense.
   MXTensor(void *data_ptr, const std::vector<int64_t> &shape, MXDType dtype,
-   size_t vID, MXContext mx_ctx)
-  : data_ptr(data_ptr), shape(shape), dtype(dtype), verID(vID), ctx(mx_ctx) {}
+   size_t vID, MXContext mx_ctx, MXStorageType stype = kDefaultStorage)
 
 Review comment:
   Adding a default storage type here enables sparse support without any 
changes to the previous dense implementation. Will keeping the "convenience" 
default cause any problems?




[GitHub] [incubator-mxnet] leezu commented on a change in pull request #17530: Add deferred compute support

2020-02-13 Thread GitBox
leezu commented on a change in pull request #17530: Add deferred compute support
URL: https://github.com/apache/incubator-mxnet/pull/17530#discussion_r379021790
 
 

 ##
 File path: python/mxnet/_deferred_compute.py
 ##
 @@ -0,0 +1,95 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Deferred Compute for NDArray."""
+
+import ctypes
+import contextlib
+
+from .base import _LIB, check_call, SymbolHandle, _as_list
+from .symbol import Symbol
+
+__all__ = []
+
+def is_deferred_compute():
+"""Get status of deferred compute mode."""
+curr = ctypes.c_bool()
+check_call(_LIB.MXNDArrayIsDeferredComputeEnabled(ctypes.byref(curr)))
+return curr.value
+
+def set_deferred_compute(is_deferred_compute):
+"""Enable / Disable deferred compute mode.
+
+Parameters
+----------
+is_deferred_compute: bool
+
+Returns
+-------
+Previous deferred compute state.
+"""
+prev = ctypes.c_int()
+check_call(_LIB.MXNDArraySetDeferredComputeEnabled(
+ctypes.c_int(is_deferred_compute), ctypes.byref(prev)))
+return bool(prev.value)
+
+
+@contextlib.contextmanager
+def context():
+# Like other MXNet context manager, this bleeds state across concurrent
 
 Review comment:
   This refers to problems if users use 
https://docs.python.org/3/library/asyncio.html
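
   The concern can be demonstrated without MXNet (the names below are 
hypothetical stand-ins for the process-global deferred-compute flag): any 
coroutine scheduled while the context manager is active observes the toggled 
state.

```python
import asyncio
import contextlib

_STATE = {"deferred": False}       # stand-in for a process-global mode flag

@contextlib.contextmanager
def deferred_compute():
    prev = _STATE["deferred"]
    _STATE["deferred"] = True
    try:
        yield
    finally:
        _STATE["deferred"] = prev  # restore previous state on exit

observed = []

async def uses_context():
    with deferred_compute():
        await asyncio.sleep(0.02)  # other tasks get scheduled here
    observed.append(("after", _STATE["deferred"]))

async def bystander():
    await asyncio.sleep(0.01)
    # Runs while uses_context() still holds the context: the flag leaks.
    observed.append(("bystander", _STATE["deferred"]))

async def main():
    await asyncio.gather(uses_context(), bystander())

asyncio.run(main())
# observed == [('bystander', True), ('after', False)]
```

   Scoping the flag with `contextvars` instead of a global would keep it 
per-task.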




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17569: [WIP] Adding sparse support to MXTensor for custom operators

2020-02-13 Thread GitBox
samskalicky commented on a change in pull request #17569: [WIP] Adding sparse 
support to MXTensor for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17569#discussion_r379025696
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -229,20 +241,54 @@ enum MXReturnValue {
   MX_SUCCESS = 1,
 };
 
+struct ChunkDense {
+  // Pointer to data.
+  void *data{nullptr};
+  // Size of data in bytes.
+  size_t dataSize{0};
+  // shape of data.
+  std::vector<int64_t> shape;
+  // Context of data.
+  // MXContext ctx;
+};
+
+struct ChunkSparse {
+  // Pointer to data.
+  void *data{nullptr};
+  // Size of data in bytes.
+  size_t dataSize{0};
+  // length of data.
+  int64_t data_lens;
+
+  // To store aux data for sparse.
+  // for row_sparse, aux_data[0] = indices
+  // for csr, aux_data[0] = indptr, aux_data[1] = indices
+  std::vector<std::vector<int64_t>> aux_data;
+
+  // Lens of the aux_data.
+  // for row_sparse, aux_lens[0] = len(indices)
+  // for csr, aux_lens[0] = len(indptr), aux_lens[1] = len(indices)
+  std::vector<int64_t> aux_lens;
+  // Context of data.
+  // MXContext ctx;
+};
+
 /*!
  * \brief Tensor data structure used by custom operator
  */
 struct MXTensor {
-  MXTensor() : data_ptr(NULL), dtype(kUNSET), verID(0) {}
+  MXTensor() : data_ptr(nullptr), dtype(kUNSET), verID(0), stype(kDefaultStorage) {}
 
+  // Constructor for dense.
   MXTensor(void *data_ptr, const std::vector<int64_t> &shape, MXDType dtype,
-   size_t vID, MXContext mx_ctx)
-  : data_ptr(data_ptr), shape(shape), dtype(dtype), verID(vID), ctx(mx_ctx) {}
+   size_t vID, MXContext mx_ctx, MXStorageType stype = kDefaultStorage)
 
 Review comment:
   Here, where we create the MXTensor objects, we should set whether the 
MXTensor is dense/CSR/RowSparse every time, so there's no need for a default:
   
https://github.com/apache/incubator-mxnet/blob/f5a1014479351f3139b92c437d65de3a3a653196/include/mxnet/lib_api.h#L1114-L1118
   




[GitHub] [incubator-mxnet] haojin2 merged pull request #17416: [Numpy] add polynomial polyval

2020-02-13 Thread GitBox
haojin2 merged pull request #17416: [Numpy] add polynomial polyval
URL: https://github.com/apache/incubator-mxnet/pull/17416
 
 
   




[GitHub] [incubator-mxnet] haojin2 merged pull request #17302: [numpy]add op random.logistic, random.gumbel

2020-02-13 Thread GitBox
haojin2 merged pull request #17302: [numpy]add op random.logistic, random.gumbel
URL: https://github.com/apache/incubator-mxnet/pull/17302
 
 
   




[incubator-mxnet] branch master updated (f5a1014 -> fafb888)

2020-02-13 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from f5a1014  [numpy] add op matmul (#16990)
 add fafb888  add polyval (#17416)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  |  66 +++-
 python/mxnet/numpy/multiarray.py   |  54 +-
 python/mxnet/numpy_dispatch_protocol.py|   1 +
 python/mxnet/symbol/numpy/_symbol.py   |  40 ++-
 src/operator/numpy/np_polynomial_op-inl.h  | 100 +
 src/operator/numpy/np_polynomial_op.cc | 120 +
 src/operator/numpy/np_polynomial_op.cu |  93 
 .../python/unittest/test_numpy_interoperability.py |  15 +++
 tests/python/unittest/test_numpy_op.py |  67 
 9 files changed, 548 insertions(+), 8 deletions(-)
 create mode 100644 src/operator/numpy/np_polynomial_op-inl.h
 create mode 100644 src/operator/numpy/np_polynomial_op.cc
 create mode 100644 src/operator/numpy/np_polynomial_op.cu



[incubator-mxnet] branch master updated (fafb888 -> 48aa701)

2020-02-13 Thread haoj

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from fafb888  add polyval (#17416)
 add 48aa701  [numpy]add op random.logistic, random.gumbel (#17302)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/random.py   | 106 +
 python/mxnet/numpy/random.py   | 127 ++
 python/mxnet/symbol/numpy/random.py| 107 -
 .../np_location_scale_op.cc}   |  40 +-
 .../np_location_scale_op.cu}   |  24 +-
 src/operator/numpy/random/np_location_scale_op.h   | 449 +
 tests/nightly/test_np_random.py|  40 +-
 tests/python/unittest/test_numpy_op.py |  26 +-
 8 files changed, 870 insertions(+), 49 deletions(-)
 copy src/operator/numpy/{linalg/np_norm_forward.cc => 
random/np_location_scale_op.cc} (50%)
 copy src/operator/numpy/{np_tril_op.cu => random/np_location_scale_op.cu} (53%)
 create mode 100644 src/operator/numpy/random/np_location_scale_op.h





[GitHub] [incubator-mxnet] haojin2 closed pull request #16944: Temporarily disabling some tests to alleviate CI problem

2020-02-13 Thread GitBox
haojin2 closed pull request #16944: Temporarily disabling some tests to 
alleviate CI problem
URL: https://github.com/apache/incubator-mxnet/pull/16944
 
 
   




[GitHub] [incubator-mxnet] leezu merged pull request #17582: Fix Apache RAT License check

2020-02-13 Thread GitBox
leezu merged pull request #17582: Fix Apache RAT License check
URL: https://github.com/apache/incubator-mxnet/pull/17582
 
 
   




[incubator-mxnet] branch master updated (48aa701 -> 755541c)

2020-02-13 Thread lausen
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 48aa701  [numpy]add op random.logistic, random.gumbel (#17302)
 add 755541c  apache-rat: use binary release instead of build from source 
(#17582)

No new revisions were added by this update.

Summary of changes:
 Makefile | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)



[GitHub] [incubator-mxnet] ptrendx merged pull request #17521: cmake: don't build PTX and 3.5 arch if cuda arch detection fails

2020-02-13 Thread GitBox
ptrendx merged pull request #17521: cmake: don't build PTX and 3.5 arch if cuda 
arch detection fails
URL: https://github.com/apache/incubator-mxnet/pull/17521
 
 
   




[incubator-mxnet] branch master updated (755541c -> d004c2b)

2020-02-13 Thread ptrendx
This is an automated email from the ASF dual-hosted git repository.

ptrendx pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 755541c  apache-rat: use binary release instead of build from source 
(#17582)
 add d004c2b  cmake: don't build PTX and 3.5 arch if cuda arch detection 
fails (#17521)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt |   4 +-
 LICENSE|   3 +-
 cmake/{Modules => upstream}/FindCUDAToolkit.cmake  |   0
 cmake/upstream/select_compute_arch.cmake   | 299 +
 .../nightly/apache_rat_license_check/rat-excludes  |   4 +-
 tools/license_header.py|   3 +-
 6 files changed, 308 insertions(+), 5 deletions(-)
 rename cmake/{Modules => upstream}/FindCUDAToolkit.cmake (100%)
 create mode 100644 cmake/upstream/select_compute_arch.cmake



[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-13 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new a8440d6  Bump the publish timestamp.
a8440d6 is described below

commit a8440d68584f195ae9563493fc86504798efd7f1
Author: mxnet-ci 
AuthorDate: Thu Feb 13 18:59:52 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..e9c96ff
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Feb 13 18:59:52 UTC 2020



[GitHub] [incubator-mxnet] zachgk commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation (#16438)

2020-02-13 Thread GitBox
zachgk commented on issue #17503: Add Scala 2.12 and 2.13 cross-compilation 
(#16438)
URL: https://github.com/apache/incubator-mxnet/pull/17503#issuecomment-585921461
 
 
   > You may also want to set `use_length=True`? See the API definition: 
https://mxnet.incubator.apache.org/api/python/docs/api/symbol/symbol.html#mxnet.symbol.softmax
   
   @TaoLv Is something wrong with the softmax operator? It says that the length 
argument is required, but I think it is supposed to be optional (and used to be 
optional). The examples in the documentation you linked to don't include a 
length argument. The current problem is that one of our java examples 
originally called `softmax(data)`, but now the signature is `softmax(data, 
length)` so we need to figure out what to pass for the length argument.




[GitHub] [incubator-mxnet] ChaiBapchya edited a comment on issue #17331: [mxnet 2.0] [item 2.4] Turning on large tensor support by default

2020-02-13 Thread GitBox
ChaiBapchya edited a comment on issue #17331: [mxnet 2.0] [item 2.4] Turning on 
large tensor support by default
URL: 
https://github.com/apache/incubator-mxnet/issues/17331#issuecomment-580146186
 
 
   [OpPerf] : Indexing Ops https://github.com/apache/incubator-mxnet/pull/16253 
[Merged]
   [OpPerf] : Neural Network Loss Ops 
https://github.com/apache/incubator-mxnet/pull/17482 [Merged]
   [OpPerf] : Consolidate array manipulation related operators #17487 




[GitHub] [incubator-mxnet] sl1pkn07 closed issue #17589: OSError: /tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: _ZN4dmlc14RecordIOReader10NextRecordEPSs

2020-02-13 Thread GitBox
sl1pkn07 closed issue #17589: OSError: 
/tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: 
_ZN4dmlc14RecordIOReader10NextRecordEPSs
URL: https://github.com/apache/incubator-mxnet/issues/17589
 
 
   




[GitHub] [incubator-mxnet] sl1pkn07 commented on issue #17589: OSError: /tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: _ZN4dmlc14RecordIOReader10NextRecordEPSs

2020-02-13 Thread GitBox
sl1pkn07 commented on issue #17589: OSError: 
/tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: 
_ZN4dmlc14RecordIOReader10NextRecordEPSs
URL: 
https://github.com/apache/incubator-mxnet/issues/17589#issuecomment-585957652
 
 
   Seems fixed.
   
   Building opencv (with CUDA support) and openexr with GCC 8 (instead of GCC 9) made the error go away.
   
   Sorry for the noise.




[GitHub] [incubator-mxnet] sl1pkn07 edited a comment on issue #17589: OSError: /tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: _ZN4dmlc14RecordIOReader10NextRecordEPSs

2020-02-13 Thread GitBox
sl1pkn07 edited a comment on issue #17589: OSError: 
/tmp/makepkg/sl1-mxnet-git/src/build/libmxnet.so.1.5.1: undefined symbol: 
_ZN4dmlc14RecordIOReader10NextRecordEPSs
URL: 
https://github.com/apache/incubator-mxnet/issues/17589#issuecomment-585957652
 
 
   Seems fixed.
   
   Building opencv (4.2.0, with CUDA support) and openexr with GCC 8 (instead of GCC 9) made the error go away.
   
   Sorry for the noise.
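
   For anyone debugging a similar error: the mangled name can be decoded with `c++filt` (from binutils). The trailing `PSs` is the Itanium-ABI shorthand for a `std::string*` using the pre-C++11 string ABI, which is consistent with a GCC dual-ABI mismatch between libraries built with different compiler versions. A minimal diagnostic sketch (the library path is illustrative):
   
   ```shell
   # Demangle the missing symbol; "Ss" is the mangling shorthand for
   # std::string, so "PSs" is a std::string* parameter.
   echo '_ZN4dmlc14RecordIOReader10NextRecordEPSs' | c++filt
   # prints: dmlc::RecordIOReader::NextRecord(std::string*)
   
   # List unresolved dynamic symbols in the built library (path illustrative):
   # nm -D --undefined-only build/libmxnet.so.1.5.1 | grep RecordIOReader
   ```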




[GitHub] [incubator-mxnet] leezu commented on issue #16408: Add MXNet Ops for fast multihead attention

2020-02-13 Thread GitBox
leezu commented on issue #16408: Add MXNet Ops for fast multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-586006743
 
 
   BTW, `cublasGemmStridedBatchedEx` used in this PR is broken in Cuda 10.1 
which will cause crashes on p2 instances. Seems fixed in Cuda 10.2 (as per 
release notes).




[GitHub] [incubator-mxnet] access2rohit commented on issue #17462: Updated PartialSortSmallK for LT support

2020-02-13 Thread GitBox
access2rohit commented on issue #17462: Updated PartialSortSmallK for LT support
URL: https://github.com/apache/incubator-mxnet/pull/17462#issuecomment-586007972
 
 
   @mxnet-label-bot update [pr-awaiting-merge]




[GitHub] [incubator-mxnet] access2rohit commented on issue #17462: Updated PartialSortSmallK for LT support

2020-02-13 Thread GitBox
access2rohit commented on issue #17462: Updated PartialSortSmallK for LT support
URL: https://github.com/apache/incubator-mxnet/pull/17462#issuecomment-586007892
 
 
   @apeforest can you review and merge




[GitHub] [incubator-mxnet] rondogency commented on issue #17486: Update CustomOp doc with changes for GPU support

2020-02-13 Thread GitBox
rondogency commented on issue #17486: Update CustomOp doc with changes for GPU 
support
URL: https://github.com/apache/incubator-mxnet/pull/17486#issuecomment-586023537
 
 
   @a550461053 Thanks for the questions. Currently you cannot call MXNet 
built-in operator from MXNet. We are making a plan for adding this support, but 
it cannot be easily implemented and may be a separate project. Also I added a 
snippet of how to use mutable input, hope it would help.




[GitHub] [incubator-mxnet] rondogency edited a comment on issue #17486: Update CustomOp doc with changes for GPU support

2020-02-13 Thread GitBox
rondogency edited a comment on issue #17486: Update CustomOp doc with changes 
for GPU support
URL: https://github.com/apache/incubator-mxnet/pull/17486#issuecomment-586023537
 
 
   @a550461053 Thanks for the questions. Currently you cannot call built-in 
operators from MXNet. We are making a plan for adding this support, but it 
cannot be easily implemented and may be a separate project. Also I added a 
snippet of how to use mutable input, hope it would help.




[GitHub] [incubator-mxnet] rondogency edited a comment on issue #17486: Update CustomOp doc with changes for GPU support

2020-02-13 Thread GitBox
rondogency edited a comment on issue #17486: Update CustomOp doc with changes 
for GPU support
URL: https://github.com/apache/incubator-mxnet/pull/17486#issuecomment-586023537
 
 
   @a550461053 Thanks for the questions. Currently you cannot call MXNet 
built-in operators from a custom operator. We are making a plan for adding this 
support, but it cannot be easily implemented and may be a separate project. 
Also I added a snippet of how to use mutable input, hope it would help.




[GitHub] [incubator-mxnet] D-Roberts opened a new pull request #17590: [WIP][numpy] Implement Weibull backward

2020-02-13 Thread GitBox
D-Roberts opened a new pull request #17590: [WIP][numpy] Implement Weibull 
backward
URL: https://github.com/apache/incubator-mxnet/pull/17590
 
 
   ## Description ##
   Add backwards implementation to np.random.weibull
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - [ ] Code is well-documented: 
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - Add back; add test of grad.
   - Restrict a>0.
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2020-02-13 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new d52a451  Bump the publish timestamp.
d52a451 is described below

commit d52a45198674d5ed133ce13a2b61dfa4b3e77556
Author: mxnet-ci 
AuthorDate: Fri Feb 14 00:42:42 2020 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..c0e8f49
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Fri Feb 14 00:42:42 UTC 2020



[GitHub] [incubator-mxnet] apeforest merged pull request #17511: Implement all miscellaneous ops

2020-02-13 Thread GitBox
apeforest merged pull request #17511: Implement all miscellaneous ops
URL: https://github.com/apache/incubator-mxnet/pull/17511
 
 
   




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators

2020-02-13 Thread GitBox
apeforest commented on a change in pull request #17487: [OpPerf] Consolidate 
array manipulation related operators
URL: https://github.com/apache/incubator-mxnet/pull/17487#discussion_r379200487
 
 

 ##
 File path: benchmark/opperf/README.md
 ##
 @@ -72,6 +73,8 @@ python incubator-mxnet/benchmark/opperf/opperf.py 
--output-format json --output-
 
 3. **dtype** : By default, `float32`. You can override and set the global 
dtype for all operator benchmarks. Example: --dtype float64.
 
+4. **profiler** : By default, 'native'. You can override and set the global 
profiler for all operator benchmarks. Example: --profiler 'python'.
 
 Review comment:
   I know what it means. Could you provide more information for first-time users?




[incubator-mxnet] branch master updated (d004c2b -> 8438d98)

2020-02-13 Thread apeforest
This is an automated email from the ASF dual-hosted git repository.

apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d004c2b  cmake: don't build PTX and 3.5 arch if cuda arch detection 
fails (#17521)
 add 8438d98  Implement all miscellaneous ops (#17511)

No new revisions were added by this update.

Summary of changes:
 benchmark/opperf/nd_operations/misc_operators.py | 124 +++
 benchmark/opperf/opperf.py   |   4 +
 benchmark/opperf/rules/default_params.py |  36 ++-
 benchmark/opperf/utils/benchmark_utils.py|   2 +-
 benchmark/opperf/utils/op_registry_utils.py  |  70 -
 benchmark/opperf/utils/profiler_utils.py |  11 +-
 6 files changed, 217 insertions(+), 30 deletions(-)
 create mode 100644 benchmark/opperf/nd_operations/misc_operators.py



[GitHub] [incubator-mxnet] apeforest commented on issue #17462: Updated PartialSortSmallK for LT support

2020-02-13 Thread GitBox
apeforest commented on issue #17462: Updated PartialSortSmallK for LT support
URL: https://github.com/apache/incubator-mxnet/pull/17462#issuecomment-586046568
 
 
   Could we decide the data type at runtime? This operator seems very general 
and we should try to prevent any memory regression if possible. @ptrendx please 
review it for the GPU performance.




[GitHub] [incubator-mxnet] a550461053 commented on issue #17486: Update CustomOp doc with changes for GPU support

2020-02-13 Thread GitBox
a550461053 commented on issue #17486: Update CustomOp doc with changes for GPU 
support
URL: https://github.com/apache/incubator-mxnet/pull/17486#issuecomment-586058857
 
 
   > > Ok, thank you. I want to create a custom operator calling another 
operator which input NDArray. Both operator is async pushed to engine, I think 
this way is possible. Also, I am not clear on using mutateInputs API, how and 
when to use it? If you can provide a example of the mutateInputs API, I will be 
grateful to you~
   > 
   > Hi @a550461053 currently the custom operator design focused on separating 
the custom operator from the MXNet backend source code complexity. This means 
that your custom operator can (must) be entirely separated from MXNet. So you 
cannot call a regular built-in MXNet operator from your custom operator. We 
have an item here #17006 for adding support in the future to be able to do 
this, but it is not implemented yet.
   > 
   > As for how to use mutateInputs, it works exactly as the doc describes:
   > 
   > > This function allows you to mark some inputs to be mutable inputs. It is 
useful when using aux parameters for BatchNorm-like operators.
   > 
   > So lets say you have an operator with 5 inputs, you can mark the indices 
of the inputs that you want to be mutable like this (for example mark the last 
two inputs as mutable):
   > 
   > ```
   > MXReturnValue batchNorm_mutateInputs(std::map<std::string, std::string> attrs,
   >  std::vector<int> &input_indices) {
   >   // mark mutable inputs
   >   input_indices.push_back(3);
   >   input_indices.push_back(4);
   >   return MX_SUCCESS;
   > }
   > ```
   
   Thank you, and @rondogency. I am now clear on the use of the mutateInputs API.
   But for now I can only build MXNet to create the C++ custom op, and I have a problem compiling it into a Python wheel at #17577.




[GitHub] [incubator-mxnet] samskalicky commented on issue #17577: build mxnet from source and get ImportError: cannot import name 'NDArrayHandle'

2020-02-13 Thread GitBox
samskalicky commented on issue #17577: build mxnet from source and get 
ImportError: cannot import name 'NDArrayHandle'
URL: 
https://github.com/apache/incubator-mxnet/issues/17577#issuecomment-586060538
 
 
   @apeforest any idea on kvstore?




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-02-13 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r379217321
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -117,9 +117,13 @@ def prepare_op_inputs(op, arg_params):
 
 # 3d tensor is needed by following ops
 ops_3d = ['CTCLoss', 'ctc_loss']
-
+
 
 Review comment:
   nit: fix indent
   If it's the only change needed, don't do it in this PR, to prevent CI reruns.



[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-02-13 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r379217519
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -117,9 +117,13 @@ def prepare_op_inputs(op, arg_params):
 
 # 3d tensor is needed by following ops
 ops_3d = ['CTCLoss', 'ctc_loss']
-
+
 
 Review comment:
   Actually you need to rebase anyway so fix it in this PR 😛 




[GitHub] [incubator-mxnet] apeforest commented on issue #16735: Use single-bit for mask in dropout operator

2020-02-13 Thread GitBox
apeforest commented on issue #16735: Use single-bit for mask in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-586062120
 
 
   @roywei Using the test script in 
https://github.com/apache/incubator-mxnet/pull/13896
   Build | runtime (before) | runtime (after) 
   ---|---|---
   CPU w/ MKL | 262 ms ± 1.2 ms | 337 ms ± 12.5 ms
   
   
   Using python timer to measure CPU performance with MKL:
   
   This PR:
   
   ```
   [{'Dropout': [{'avg_time_Dropout': 1.1714265774935484, 'p50_time_Dropout': 
1.1715246364474297, 'p90_time_Dropout': 1.190436165779829, 'p99_time_Dropout': 
1.2154309218749404, 'inputs': {'data': (1024, 1024)}}]}]
   ```
   
   Master:
   ```
   [{'Dropout': [{'avg_time_Dropout': 0.6394564639776945, 'p50_time_Dropout': 
0.6996351294219494, 'p90_time_Dropout': 1.045508868992329, 'p99_time_Dropout': 
1.59036863129586, 'inputs': {'data': (1024, 1024)}}]}]
   ```




[GitHub] [incubator-mxnet] apeforest edited a comment on issue #16735: Use single-bit for mask in dropout operator

2020-02-13 Thread GitBox
apeforest edited a comment on issue #16735: Use single-bit for mask in dropout 
operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-586062120
 
 
   @roywei Using the test script in 
https://github.com/apache/incubator-mxnet/pull/13896
   
   Build | runtime (before) | runtime (after) 
   ---|---|---
   CPU w/ MKL | 262 ms ± 1.2 ms | 337 ms ± 12.5 ms
   
   
   Using python timer to measure CPU performance with MKL:
   
   This PR:
   
   ```
   [{'Dropout': [{'avg_time_Dropout': 1.1714265774935484, 'p50_time_Dropout': 
1.1715246364474297, 'p90_time_Dropout': 1.190436165779829, 'p99_time_Dropout': 
1.2154309218749404, 'inputs': {'data': (1024, 1024)}}]}]
   ```
   
   Master:
   ```
   [{'Dropout': [{'avg_time_Dropout': 0.6394564639776945, 'p50_time_Dropout': 
0.6996351294219494, 'p90_time_Dropout': 1.045508868992329, 'p99_time_Dropout': 
1.59036863129586, 'inputs': {'data': (1024, 1024)}}]}]
   ```




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators

2020-02-13 Thread GitBox
ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate 
array manipulation related operators
URL: https://github.com/apache/incubator-mxnet/pull/17487#discussion_r379224557
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -137,26 +140,24 @@ def prepare_op_inputs(op, arg_params):
 arg_values[arg_name] = DEFAULTS_INPUTS["dtype_int"]
 elif (op.startswith(('random','sample')) or op in float_only) and 
arg_name == "dtype":
 arg_values[arg_name] = DEFAULTS_INPUTS["dtype_float"]
-elif "NDArray" in arg_type and op == "ravel_multi_index":
-arg_values[arg_name] = DEFAULTS_INPUTS["ravel_data"]
 elif op in custom_data and arg_name + "_" + op.lower() in 
DEFAULTS_INPUTS:
 arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_" + op.lower()]
-elif "NDArray" in arg_type and arg_name + "_nd" in DEFAULTS_INPUTS:
-arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_nd"]
-elif "NDArray" in arg_type and op in ops_4d and arg_name + "_4d" in 
DEFAULTS_INPUTS:
+elif op in ops_4d and arg_name + "_4d" in DEFAULTS_INPUTS:
 arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_4d"]
-elif "NDArray" in arg_type and op in ops_3d and arg_name + "_3d" in 
DEFAULTS_INPUTS:
-arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_3d"]
-elif "NDArray" in arg_type and op == 'softmax_cross_entropy':
-arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_smce"]
+elif op in ops_dim1 and arg_name + "_dim1" in DEFAULTS_INPUTS:
+arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_dim1"]
+elif "NDArray" in arg_type:
+if op == "ravel_multi_index":
+arg_values[arg_name] = DEFAULTS_INPUTS["ravel_data"]
+elif arg_name + "_nd" in DEFAULTS_INPUTS:
+arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_nd"]
+elif op in ops_3d and arg_name + "_3d" in DEFAULTS_INPUTS:
+arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_3d"]
+elif op == 'softmax_cross_entropy':
+arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_smce"]
+# default case
 elif arg_name in DEFAULTS_INPUTS:
 arg_values[arg_name] = DEFAULTS_INPUTS[arg_name]
-elif "float" in arg_type and arg_name + "_float" in DEFAULTS_INPUTS:
 
 Review comment:
   Both ifs are never reached, because the default case `arg_name in DEFAULTS_INPUTS` is hit first; it is the base case.
   E.g. if there are two keys "x" and "x_float", the lookup will always find "x" thanks to `elif arg_name in DEFAULTS_INPUTS:` and will never reach `arg_name + "_float" in DEFAULTS_INPUTS`.
   Hence these had to be moved up.
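
   The branch-ordering pitfall described above can be sketched in isolation (the keys and helper functions below are hypothetical illustrations, not actual opperf code):
   
   ```python
   # Hypothetical defaults table: a generic key "x" and a more specific
   # type-qualified key "x_float".
   DEFAULTS_INPUTS = {"x": 1.0, "x_float": 2.0}
   
   def pick_default_buggy(arg_name, arg_type):
       # Generic check first: it always matches "x", so the
       # "_float"-specific branch below is unreachable.
       if arg_name in DEFAULTS_INPUTS:
           return DEFAULTS_INPUTS[arg_name]
       elif "float" in arg_type and arg_name + "_float" in DEFAULTS_INPUTS:
           return DEFAULTS_INPUTS[arg_name + "_float"]
   
   def pick_default_fixed(arg_name, arg_type):
       # Specific check first, generic lookup as the base case.
       if "float" in arg_type and arg_name + "_float" in DEFAULTS_INPUTS:
           return DEFAULTS_INPUTS[arg_name + "_float"]
       elif arg_name in DEFAULTS_INPUTS:
           return DEFAULTS_INPUTS[arg_name]
   
   print(pick_default_buggy("x", "float"))  # 1.0 — specific default shadowed
   print(pick_default_fixed("x", "float"))  # 2.0 — specific default wins
   ```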




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators

2020-02-13 Thread GitBox
ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate 
array manipulation related operators
URL: https://github.com/apache/incubator-mxnet/pull/17487#discussion_r379223963
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -137,26 +140,24 @@ def prepare_op_inputs(op, arg_params):
 arg_values[arg_name] = DEFAULTS_INPUTS["dtype_int"]
 elif (op.startswith(('random','sample')) or op in float_only) and 
arg_name == "dtype":
 arg_values[arg_name] = DEFAULTS_INPUTS["dtype_float"]
-elif "NDArray" in arg_type and op == "ravel_multi_index":
-arg_values[arg_name] = DEFAULTS_INPUTS["ravel_data"]
 elif op in custom_data and arg_name + "_" + op.lower() in 
DEFAULTS_INPUTS:
 arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_" + op.lower()]
-elif "NDArray" in arg_type and arg_name + "_nd" in DEFAULTS_INPUTS:
-arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_nd"]
-elif "NDArray" in arg_type and op in ops_4d and arg_name + "_4d" in 
DEFAULTS_INPUTS:
+elif op in ops_4d and arg_name + "_4d" in DEFAULTS_INPUTS:
 arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_4d"]
-elif "NDArray" in arg_type and op in ops_3d and arg_name + "_3d" in 
DEFAULTS_INPUTS:
-arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_3d"]
-elif "NDArray" in arg_type and op == 'softmax_cross_entropy':
-arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_smce"]
+elif op in ops_dim1 and arg_name + "_dim1" in DEFAULTS_INPUTS:
+arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_dim1"]
+elif "NDArray" in arg_type:
 
 Review comment:
   Clubbed all the conditions that apply specifically to NDArray args under a single NDArray branch for better readability.




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators

2020-02-13 Thread GitBox
ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate 
array manipulation related operators
URL: https://github.com/apache/incubator-mxnet/pull/17487#discussion_r379225052
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -137,26 +140,24 @@ def prepare_op_inputs(op, arg_params):
 arg_values[arg_name] = DEFAULTS_INPUTS["dtype_int"]
 elif (op.startswith(('random','sample')) or op in float_only) and 
arg_name == "dtype":
 arg_values[arg_name] = DEFAULTS_INPUTS["dtype_float"]
-elif "NDArray" in arg_type and op == "ravel_multi_index":
-arg_values[arg_name] = DEFAULTS_INPUTS["ravel_data"]
 elif op in custom_data and arg_name + "_" + op.lower() in DEFAULTS_INPUTS:
 arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_" + op.lower()]
-elif "NDArray" in arg_type and arg_name + "_nd" in DEFAULTS_INPUTS:
-arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_nd"]
-elif "NDArray" in arg_type and op in ops_4d and arg_name + "_4d" in 
DEFAULTS_INPUTS:
+elif op in ops_4d and arg_name + "_4d" in DEFAULTS_INPUTS:
 arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_4d"]
-elif "NDArray" in arg_type and op in ops_3d and arg_name + "_3d" in 
DEFAULTS_INPUTS:
-arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_3d"]
-elif "NDArray" in arg_type and op == 'softmax_cross_entropy':
-arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_smce"]
+elif op in ops_dim1 and arg_name + "_dim1" in DEFAULTS_INPUTS:
+arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_dim1"]
+elif "NDArray" in arg_type:
+if op == "ravel_multi_index":
+arg_values[arg_name] = DEFAULTS_INPUTS["ravel_data"]
+elif arg_name + "_nd" in DEFAULTS_INPUTS:
+arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_nd"]
+elif op in ops_3d and arg_name + "_3d" in DEFAULTS_INPUTS:
+arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_3d"]
+elif op == 'softmax_cross_entropy':
+arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_smce"]
+# default case
 elif arg_name in DEFAULTS_INPUTS:
 arg_values[arg_name] = DEFAULTS_INPUTS[arg_name]
-elif "float" in arg_type and arg_name + "_float" in DEFAULTS_INPUTS:
-arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_float"]
-elif "Shape" in arg_type and arg_name + "_shape" in DEFAULTS_INPUTS:
 
 Review comment:
   Another unreachable elif condition.
   Both `axis` and `axis_shape` exist in DEFAULTS_INPUTS, and since `if arg_name in DEFAULTS_INPUTS:` is checked before `arg_name + "_shape" in DEFAULTS_INPUTS`, the `_shape` branch can never be reached.
   
   Hence moved it up before the default-case check.
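
   A minimal sketch of why the later `elif` can never fire once the broader membership test precedes it (the keys here are illustrative only):

```python
# When both "axis" and "axis_shape" are present, the broader check on the
# bare arg name always wins, so the "_shape" branch is dead code for "axis".
DEFAULTS_INPUTS = {"axis": 0, "axis_shape": (2, 2)}

def resolve(arg_name, arg_type):
    if arg_name in DEFAULTS_INPUTS:                       # broader check runs first
        return DEFAULTS_INPUTS[arg_name]
    elif "Shape" in arg_type and arg_name + "_shape" in DEFAULTS_INPUTS:
        return DEFAULTS_INPUTS[arg_name + "_shape"]       # unreachable for "axis"
    return None

resolve("axis", "Shape")  # -> 0, never (2, 2)
```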




[GitHub] [incubator-mxnet] TaoLv commented on issue #16735: Use single-bit for mask in dropout operator

2020-02-13 Thread GitBox
TaoLv commented on issue #16735: Use single-bit for mask in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-586069482
 
 
   Does the `avg_time_Dropout` include backward time? @apeforest 




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators

2020-02-13 Thread GitBox
ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate 
array manipulation related operators
URL: https://github.com/apache/incubator-mxnet/pull/17487#discussion_r379223794
 
 

 ##
 File path: benchmark/opperf/rules/default_params.py
 ##
 @@ -238,7 +247,13 @@
"data_3d": DEFAULT_DATA_3d,
"label_smce": DEFAULT_LABEL_SMCE,
"label": DEFAULT_LABEL,
-   "index": DEFAULT_INDEX,
 
 Review comment:
   This was a duplicate, hence removed.




[GitHub] [incubator-mxnet] QimingZheng opened a new issue #17591: Support for Partitioned Variables (used in large sparse models)

2020-02-13 Thread GitBox
QimingZheng opened a new issue #17591: Support for Partitioned Variables (used 
in large sparse models)
URL: https://github.com/apache/incubator-mxnet/issues/17591
 
 
   ## Description
   Support partitioned variables when training models with large embedding 
layers (e.g. recommendation system).
   
   ## Motivation
   
   Models in recommendation tasks [e.g. **Factorization Machines** (1) or **DeepFM** (2)] are usually very large, using **billions of features** (including user ids and product ids).
   
   In the setting of distributed training, if each worker holds a local copy of 
the embedding parameter, it could easily exceed the CPU-Mem constraint of one 
server. It's more appropriate to **shard the embedding variable to multiple 
servers** and manage each partition with the parameter servers, which is 
exactly what TF is doing now when training large sparse models [3].
   
   This motivation is also discussed in section 4.2 in TF-OSDI paper [4]. For 
example, TF manages large embedding layer by:
   
   ```python
    params = tf.get_variable("embedding", shape=(1, 128), dtype=tf.float32,
                             partitioner=tf.min_max_variable_partitioner(
                                 max_partitions=num_ps_replicas,
                                 axis=0))
   
   tf.nn.embedding_lookup(params, ids, max_norm=None, name=None)
   
   # params: a list of tensors all of same shape except for the first dimension,
   # representing sharded embedding tensors, or PartitionedVariable
   ```
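
   As an illustration of what a partitioner does, here is a plain-Python sketch of row-sharding an embedding table and routing a lookup to the shard that owns a row; all names and sizes are made up for this example:

```python
# Toy model of partitioned variables: each shard of rows would live on a
# different parameter server; a lookup is routed to the owning shard.
def partition_rows(num_rows, num_parts):
    """Split row indices into near-equal contiguous shards (min/max style)."""
    base, rem = divmod(num_rows, num_parts)
    shards, start = [], 0
    for p in range(num_parts):
        size = base + (1 if p < rem else 0)
        shards.append(range(start, start + size))
        start += size
    return shards

def lookup(shards, table, row):
    """Find which shard owns `row` and fetch its embedding (simulating one PS)."""
    for p, rng in enumerate(shards):
        if row in rng:
            return p, table[row]
    raise IndexError(row)

shards = partition_rows(10, 3)                    # [range(0, 4), range(4, 7), range(7, 10)]
table = {i: [float(i)] * 4 for i in range(10)}    # toy 10x4 embedding table
```

   Each row exists on exactly one shard, which is what keeps the global parameter footprint at one copy instead of one copy per worker.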
   
   So there is **only one copy of the embedding layer globally** in TF. In MXNet, by contrast, each worker holds one copy and the parameter servers maintain another, so there are **N+1 copies** in total (N workers). This causes large memory consumption and makes training models larger than the CPU memory of a single server infeasible.
   
   So far as I know, MXNET has no equivalent concept of partitioned variables. 
Is it expected to be implemented in the near future?
   
   ## References
   1. Rendle, Steffen. "Factorization machines." 2010 IEEE International 
Conference on Data Mining. IEEE, 2010. 
https://www.csie.ntu.edu.tw/~b97053/paper/Rendle2010FM.pdf.
   2. Guo, Huifeng, et al. "DeepFM: a factorization-machine based neural 
network for CTR prediction." arXiv preprint arXiv:1703.04247 (2017). 
https://arxiv.org/abs/1703.04247.
   3. Embedding and Partitioned Variable in TF 2.0.

https://github.com/tensorflow/community/blob/master/rfcs/20190116-embedding-partitioned-variable.md.
   4. Martín Abadi, et al. "TensorFlow: A System for Large-Scale Machine 
Learning." 12th USENIX Symposium on Operating Systems Design and Implementation 
(OSDI ’16). 
https://www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf.




[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #16735: Use single-bit for mask in dropout operator

2020-02-13 Thread GitBox
TaoLv commented on a change in pull request #16735: Use single-bit for mask in 
dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#discussion_r379233744
 
 

 ##
 File path: src/operator/nn/dropout-inl.h
 ##
 @@ -152,15 +181,20 @@ class DropoutOp {
                   const std::vector<TBlob> &out_grad) {
     Stream<xpu> *s = ctx.get_stream<xpu>();
     Tensor<xpu, 2, DType> grad = out_grad[dropout::kOut].FlatTo2D<xpu, DType>(s);
-    Tensor<xpu, 2, DType> mask = out_data[dropout::kMask].FlatTo2D<xpu, DType>(s);
+    Tensor<xpu, 1, uint8_t> mask = out_data[dropout::kMask].FlatTo1D<xpu, uint8_t>(s);
     Tensor<xpu, 2, DType> gdata = in_grad[dropout::kData].FlatTo2D<xpu, DType>(s);
     DType *ingradptr = gdata.dptr_;
     const DType *outgradptr = grad.dptr_;
-    const DType *maskptr = mask.dptr_;
-    const int count = mask.shape_[0] * mask.shape_[1];
-#pragma omp parallel for num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
-    for (int i = 0; i < count; ++i) {
-      ingradptr[i] = outgradptr[i] * maskptr[i];
+    const uint8_t *maskptr = mask.dptr_;
+    const index_t count = grad.shape_[0] * grad.shape_[1];
+    const float pk_1 = 1.0f / this->pkeep_;
+    const int nthr = engine::OpenMP::Get()->GetRecommendedOMPThreadCount();
+#pragma omp parallel for num_threads(nthr) schedule(static, 8)
+    for (index_t i = 0; i < count; ++i) {
+      auto mask_idx = i >> 3;  // div 8;
+      uint8_t mask_offset = i & 7;  // mod 8
+      bool mask_val = maskptr[mask_idx] & (1U << mask_offset);
+      ingradptr[i] = outgradptr[i] * mask_val * pk_1;
 
 Review comment:
   Let's also use blocking in the backward path:
   
   ```cpp
   const int blk_size = 64;
   const int nblk = count / blk_size;

   #pragma omp parallel for num_threads(nthr) schedule(static, 8)
   for (index_t b = 0; b < nblk; ++b) {
     for (index_t k = 0; k < blk_size; ++k) {
       index_t i = b * blk_size + k;
       auto mask_idx = i >> 3;  // div 8;
       uint8_t mask_offset = i & 7;  // mod 8
       bool mask_val = maskptr[mask_idx] & (1U << mask_offset);
       ingradptr[i] = outgradptr[i] * mask_val * pk_1;
     }
   }

   // tail
   if (nblk * blk_size < count) {
     for (index_t i = nblk * blk_size; i < count; ++i) {
       auto mask_idx = i >> 3;  // div 8;
       uint8_t mask_offset = i & 7;  // mod 8
       bool mask_val = maskptr[mask_idx] & (1U << mask_offset);
       ingradptr[i] = outgradptr[i] * mask_val * pk_1;
     }
   }
   ```
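
   A standalone sketch of the bit-packed mask layout assumed by the `i >> 3` / `i & 7` indexing above (illustrative Python, not the MXNet code):

```python
# One dropout decision per *bit*: element i lives in byte i >> 3 at bit i & 7.
def pack_mask(bits):
    """Pack a list of 0/1 ints into bytes, LSB-first within each byte."""
    out = bytearray((len(bits) + 7) // 8)
    for i, b in enumerate(bits):
        if b:
            out[i >> 3] |= 1 << (i & 7)
    return bytes(out)

def mask_val(packed, i):
    """Read bit i back out of the packed mask, as the backward kernel does."""
    return (packed[i >> 3] >> (i & 7)) & 1

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
packed = pack_mask(bits)
assert all(mask_val(packed, i) == b for i, b in enumerate(bits))
```

   This is why the mask tensor shrinks by 8x versus storing one `DType` per element, at the cost of the extra shift/mask work in the backward loop.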




[GitHub] [incubator-mxnet] frankfliu closed pull request #16840: [WIP]: Fix crash while unloading library.

2020-02-13 Thread GitBox
frankfliu closed pull request #16840: [WIP]: Fix crash while unloading library.
URL: https://github.com/apache/incubator-mxnet/pull/16840
 
 
   




[GitHub] [incubator-mxnet] frankfliu opened a new pull request #17592: Add cmake build support for macOS static build.

2020-02-13 Thread GitBox
frankfliu opened a new pull request #17592: Add cmake build support for macOS 
static build.
URL: https://github.com/apache/incubator-mxnet/pull/17592
 
 
   ## Description ##
   Add cmake build support for macOS static build.
   Added --without-libidn2 flag for libcurl build.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [X] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ## Comments ##
   - CMake build still fails with libtiff 4.0.10. The macOS build only works with 4.0.9; will create a separate PR.
   




[GitHub] [incubator-mxnet] frankfliu commented on issue #17581: macOS static build failed with 'cblas.h' file not found

2020-02-13 Thread GitBox
frankfliu commented on issue #17581: macOS static build failed with 'cblas.h' 
file not found
URL: 
https://github.com/apache/incubator-mxnet/issues/17581#issuecomment-586081119
 
 
   The mac static build doesn't support CMake yet. I created a PR to add cmake 
support: https://github.com/apache/incubator-mxnet/pull/17592
   
   With CMake, this issue no longer exists.




[GitHub] [incubator-mxnet] hanke580 commented on a change in pull request #17567: [Numpy] Add op fmax

2020-02-13 Thread GitBox
hanke580 commented on a change in pull request #17567: [Numpy] Add op fmax
URL: https://github.com/apache/incubator-mxnet/pull/17567#discussion_r379250429
 
 

 ##
 File path: src/operator/numpy/np_elemwise_broadcast_op_extended.cc
 ##
 @@ -371,5 +371,36 @@ MXNET_OPERATOR_REGISTER_BINARY(_backward_npi_rldexp_scalar)
 .set_attr_parser([](NodeAttrs *attrs) { attrs->parsed = std::stod(attrs->dict["scalar"]); })
 .set_attr<FCompute>("FCompute", BinaryScalarOp::Backward);
 
+MXNET_OPERATOR_REGISTER_BINARY_BROADCAST(broadcast_fmax)
+.add_alias("_npi_fmax")
+.set_attr<FCompute>("FCompute", BinaryBroadcastCompute)
+.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_broadcast_fmax"});
+
+NNVM_REGISTER_OP(_backward_broadcast_fmax)
+.set_num_inputs(3)
+.set_num_outputs(2)
+.set_attr<nnvm::TIsBackward>("TIsBackward", true)
+.set_attr<nnvm::FInplaceOption>("FInplaceOption",
+  [](const NodeAttrs& attrs){
+    return std::vector<std::pair<int, int> >{{0, 1}};
+  })
+.set_attr<FResourceRequest>("FResourceRequest",
+  [](const NodeAttrs& attrs) {
+    return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};
+  })
+.set_attr<FCompute>("FCompute", BinaryBroadcastBackwardUseIn);
 
 Review comment:
   Resolved Thx
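
   For readers following along: `fmax` (like C99 `fmax` and `numpy.fmax`) differs from `maximum` in that it prefers the non-NaN operand. A pure-Python sketch of that semantic, for illustration only (not the MXNet kernel):

```python
import math

def fmax(a, b):
    """NaN-ignoring maximum: return the non-NaN operand when one input is NaN."""
    if math.isnan(a):
        return b
    if math.isnan(b):
        return a
    return a if a >= b else b

fmax(1.0, float("nan"))   # -> 1.0 (NaN ignored)
fmax(float("nan"), 2.0)   # -> 2.0
```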




[GitHub] [incubator-mxnet] TaoLv commented on issue #16899: Enable MKL-DNN in pip packages

2020-02-13 Thread GitBox
TaoLv commented on issue #16899: Enable MKL-DNN in pip packages
URL: https://github.com/apache/incubator-mxnet/pull/16899#issuecomment-586105486
 
 
   @szha @leezu @samskalicky Is it good to go? Let me know if anything need be 
adjusted. Thanks.




[GitHub] [incubator-mxnet] apeforest commented on issue #16735: Use single-bit for mask in dropout operator

2020-02-13 Thread GitBox
apeforest commented on issue #16735: Use single-bit for mask in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-586116890
 
 
   > Does the `avg_time_Dropout` include backward time? @apeforest
   
   Yes, it includes backward time as my `run_backward` is set to True




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #16735: Use single-bit for mask in dropout operator

2020-02-13 Thread GitBox
apeforest commented on a change in pull request #16735: Use single-bit for mask 
in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#discussion_r379267501
 
 

 ##
 File path: src/operator/nn/dropout-inl.h
 ##
 @@ -152,15 +181,20 @@ class DropoutOp {
                   const std::vector<TBlob> &out_grad) {
     Stream<xpu> *s = ctx.get_stream<xpu>();
     Tensor<xpu, 2, DType> grad = out_grad[dropout::kOut].FlatTo2D<xpu, DType>(s);
-    Tensor<xpu, 2, DType> mask = out_data[dropout::kMask].FlatTo2D<xpu, DType>(s);
+    Tensor<xpu, 1, uint8_t> mask = out_data[dropout::kMask].FlatTo1D<xpu, uint8_t>(s);
     Tensor<xpu, 2, DType> gdata = in_grad[dropout::kData].FlatTo2D<xpu, DType>(s);
     DType *ingradptr = gdata.dptr_;
     const DType *outgradptr = grad.dptr_;
-    const DType *maskptr = mask.dptr_;
-    const int count = mask.shape_[0] * mask.shape_[1];
-#pragma omp parallel for num_threads(engine::OpenMP::Get()->GetRecommendedOMPThreadCount())
-    for (int i = 0; i < count; ++i) {
-      ingradptr[i] = outgradptr[i] * maskptr[i];
+    const uint8_t *maskptr = mask.dptr_;
+    const index_t count = grad.shape_[0] * grad.shape_[1];
+    const float pk_1 = 1.0f / this->pkeep_;
+    const int nthr = engine::OpenMP::Get()->GetRecommendedOMPThreadCount();
+#pragma omp parallel for num_threads(nthr) schedule(static, 8)
+    for (index_t i = 0; i < count; ++i) {
+      auto mask_idx = i >> 3;  // div 8;
+      uint8_t mask_offset = i & 7;  // mod 8
+      bool mask_val = maskptr[mask_idx] & (1U << mask_offset);
+      ingradptr[i] = outgradptr[i] * mask_val * pk_1;
 
 Review comment:
   Sure




[GitHub] [incubator-mxnet] reminisce commented on issue #17562: Fail to import MXNet in distributed kvstore test

2020-02-13 Thread GitBox
reminisce commented on issue #17562: Fail to import MXNet in distributed 
kvstore test
URL: 
https://github.com/apache/incubator-mxnet/issues/17562#issuecomment-586120100
 
 
   The test failure is on Python2. The test should be launched using Python3 
instead.



