[incubator-mxnet] branch master updated (a807f6d -> 7908d7e)

2020-07-28 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from a807f6d  [NumPy] loss for np array (#17196)
 add 7908d7e  [numpy] fix flaky mixed precision binary error (#18660)

No new revisions were added by this update.

Summary of changes:
 .../numpy/np_elemwise_broadcast_logic_op.cc| 42 --
 tests/python/unittest/test_numpy_op.py | 22 +++-
 2 files changed, 60 insertions(+), 4 deletions(-)



[incubator-mxnet] branch master updated: add support for np.ndarray in autograd.function (#18790)

2020-07-25 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 98b3f73  add support for np.ndarray in autograd.function (#18790)
98b3f73 is described below

commit 98b3f73bd0f30034e3f6848eb75d38c30c8b60b4
Author: Sheng Zha 
AuthorDate: Sat Jul 25 16:19:36 2020 -0700

add support for np.ndarray in autograd.function (#18790)
---
 python/mxnet/autograd.py   | 14 +---
 tests/python/unittest/test_autograd.py | 61 ++
 2 files changed, 71 insertions(+), 4 deletions(-)

diff --git a/python/mxnet/autograd.py b/python/mxnet/autograd.py
index f968275..aac7cbc 100644
--- a/python/mxnet/autograd.py
+++ b/python/mxnet/autograd.py
@@ -28,6 +28,7 @@ from .base import NDArrayHandle, c_array, c_handle_array, c_array_buf, MXCallbackList
 from .ndarray import NDArray, _ndarray_cls
 from .ndarray import _GRAD_REQ_MAP
 from .symbol import Symbol
+from .util import is_np_array
 
 
 def set_recording(is_recording): #pylint: disable=redefined-outer-name
@@ -448,25 +449,30 @@ class Function(object):
             outputs = (outputs,)
 
         key = Function._registry.inc()
+        if is_np_array():
+            from .numpy import ndarray
+            array_cls = ndarray
+        else:
+            array_cls = NDArray
 
         def backward_entry(num_ograds, num_igrads, ptrs, reqs, is_train, _):
             """entry point for backward."""
             # pylint: disable=W0613
             try:
-                output_grads = [NDArray(ctypes.cast(i, NDArrayHandle), writable=False) \
+                output_grads = [array_cls(ctypes.cast(i, NDArrayHandle), writable=False) \
                                 for i in ptrs[:num_ograds]]
-                input_grads = [NDArray(ctypes.cast(i, NDArrayHandle), writable=True) \
+                input_grads = [array_cls(ctypes.cast(i, NDArrayHandle), writable=True) \
                                for i in ptrs[num_ograds:num_ograds+num_igrads]]
                 reqs = [reqs[i] for i in range(num_igrads)]
                 rets = self.backward(*output_grads)
-                if isinstance(rets, NDArray):
+                if isinstance(rets, array_cls):
                     rets = (rets,)
                 assert len(rets) == len(input_grads), \
                     "%s.backward must return exactly the same number " \
                     "of NDArrays as the number of NDArrays arguments to forward." \
                     "Expecting %d got %d"%(self.__class__.name, len(input_grads), len(rets))
                 for igrad, ret, req in zip(input_grads, rets, reqs):
-                    assert isinstance(ret, NDArray), \
+                    assert isinstance(ret, array_cls), \
                         "autograd.Function.backward must return NDArrays, not %s"%type(ret)
                     if req == 0:  # null
                         return True
diff --git a/tests/python/unittest/test_autograd.py b/tests/python/unittest/test_autograd.py
index 6a75eed..f9a7ecc 100644
--- a/tests/python/unittest/test_autograd.py
+++ b/tests/python/unittest/test_autograd.py
@@ -407,6 +407,67 @@ def test_function1():
 
 @with_seed()
 @pytest.mark.garbage_expected
+@use_np
+def test_np_function():
+    class func(Function):
+        def forward(self, x, y):
+            m = x / y
+            n = x * y
+            self.save_for_backward(x, y)
+            return m, n
+
+        def backward(self, dm, dn):
+            x, y = self.saved_tensors
+            dx = dm/y + dn*y
+            dy = dn*x - dm * x / y / y
+            return dx, dy
+
+    f = func()
+    x = mx.np.random.uniform(size=(10,))
+    x.attach_grad()
+    y = mx.np.random.uniform(size=(10,))
+    y.attach_grad()
+    with record():
+        m, n = f(x, y)
+        backward([m, n])
+
+    dx1 = x.grad.asnumpy()
+    dy1 = y.grad.asnumpy()
+
+    with record():
+        backward([x/y, x*y])
+
+    # Non-zero atol required, as exposed by seed 630179191
+    atol = 1e-6
+    assert_almost_equal(x.grad.asnumpy(), dx1, atol=atol)
+    assert_almost_equal(y.grad.asnumpy(), dy1, atol=atol)
+
+
+@with_seed()
+@pytest.mark.garbage_expected
+@use_np
+def test_np_function1():
+    class Foo(mx.autograd.Function):
+        def __init__(self):
+            super(Foo, self).__init__()
+
+        def forward(self, X):
+            return X + 1
+
+        def backward(self, dY):
+            return dY
+
+    with mx.autograd.record():
+        X = mx.np.zeros((3, 4))
+        #X.attach_grad()  # uncommenting this line works
+        for i in range(5):
+            f = Foo()
+            X = f(X)
+    X.wait_to_read()
+
+
+@with_seed()
+@pytest.mark.garbage_expected
 def test_get_symbol():
     x = mx.nd.ones((1,))
     x.attach_grad()
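For readers skimming the diff: the change swaps the hard-coded NDArray wrapper
for whichever array class is active, so custom autograd.Function subclasses now
work under the numpy namespace. A minimal usage sketch (not from the commit;
the Scale class and values are illustrative):

    import mxnet as mx
    from mxnet import autograd
    mx.npx.set_np()

    class Scale(autograd.Function):
        def forward(self, x):
            return x * 2
        def backward(self, dy):
            # after this commit, dy arrives as mx.np.ndarray, not mx.nd.NDArray
            return dy * 2

    x = mx.np.ones((3,))
    x.attach_grad()
    with autograd.record():
        y = Scale()(x)
    y.backward()
    print(x.grad)    # [2. 2. 2.]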



[incubator-mxnet] branch master updated (d1b0a09 -> 6462887)

2020-07-03 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d1b0a09  [numpy] FFI flip, rollaxis, stack (#18614)
 add 6462887  [numpy] Fix less/greater bug with scalar input (#18642)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  |  5 ++--
 .../numpy/np_elemwise_broadcast_logic_op.cc| 34 ++
 .../python/unittest/test_numpy_interoperability.py |  8 +
 3 files changed, 39 insertions(+), 8 deletions(-)



[incubator-mxnet] branch master updated: [numpy] Fix less/greater bug with scalar input (#18642)

2020-07-03 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 6462887  [numpy] Fix less/greater bug with scalar input (#18642)
6462887 is described below

commit 646288716cbba482d4ede0fb4f6141b2ea505090
Author: Yiyan66 <57363390+yiya...@users.noreply.github.com>
AuthorDate: Sat Jul 4 09:13:41 2020 +0800

[numpy] Fix less/greater bug with scalar input (#18642)

* fix ffi

* fix less/greater error

* back

* submodule

* fixed

Co-authored-by: Ubuntu 
---
 python/mxnet/ndarray/numpy/_op.py  |  5 ++--
 .../numpy/np_elemwise_broadcast_logic_op.cc| 34 ++
 .../python/unittest/test_numpy_interoperability.py |  8 +
 3 files changed, 39 insertions(+), 8 deletions(-)

diff --git a/python/mxnet/ndarray/numpy/_op.py b/python/mxnet/ndarray/numpy/_op.py
index 45f885a..91fea5f 100644
--- a/python/mxnet/ndarray/numpy/_op.py
+++ b/python/mxnet/ndarray/numpy/_op.py
@@ -7171,8 +7171,9 @@ def greater(x1, x2, out=None):
     >>> np.greater(1, np.ones(1))
     array([False])
     """
-    return _ufunc_helper(x1, x2, _npi.greater, _np.greater, _npi.greater_scalar,
-                         _npi.less_scalar, out)
+    if isinstance(x1, numeric_types) and isinstance(x2, numeric_types):
+        return _np.greater(x1, x2, out=out)
+    return _api_internal.greater(x1, x2, out)
 
 
 @set_module('mxnet.ndarray.numpy')
diff --git a/src/api/operator/numpy/np_elemwise_broadcast_logic_op.cc b/src/api/operator/numpy/np_elemwise_broadcast_logic_op.cc
index f0ca408..2248433 100644
--- a/src/api/operator/numpy/np_elemwise_broadcast_logic_op.cc
+++ b/src/api/operator/numpy/np_elemwise_broadcast_logic_op.cc
@@ -44,13 +44,35 @@ MXNET_REGISTER_API("_npi.not_equal")
   UFuncHelper(args, ret, op, op_scalar, nullptr);
 });
 
+void SetUFuncHelper(runtime::MXNetArgs args, runtime::MXNetRetValue* ret,
+                    const nnvm::Op* op, const nnvm::Op* op_scalar,
+                    const nnvm::Op* op_rscalar) {
+  if (args[0].type_code() == kNDArrayHandle &&
+      args[1].type_code() == kNDArrayHandle) {
+    UFuncHelper(args, ret, op, nullptr, nullptr);
+  } else if (args[0].type_code() == kNDArrayHandle) {
+    UFuncHelper(args, ret, nullptr, op_scalar, nullptr);
+  } else {
+    UFuncHelper(args, ret, nullptr, nullptr, op_rscalar);
+  }
+}
+
+MXNET_REGISTER_API("_npi.greater")
+.set_body([](runtime::MXNetArgs args, runtime::MXNetRetValue* ret) {
+  using namespace runtime;
+  const nnvm::Op* op = Op::Get("_npi_greater");
+  const nnvm::Op* op_scalar = Op::Get("_npi_greater_scalar");
+  const nnvm::Op* op_rscalar = Op::Get("_npi_less_scalar");
+  SetUFuncHelper(args, ret, op, op_scalar, op_rscalar);
+});
+
 MXNET_REGISTER_API("_npi.less")
 .set_body([](runtime::MXNetArgs args, runtime::MXNetRetValue* ret) {
   using namespace runtime;
   const nnvm::Op* op = Op::Get("_npi_less");
   const nnvm::Op* op_scalar = Op::Get("_npi_less_scalar");
-  const nnvm::Op* op_rscalar = Op::Get("_npi_less_scalar");
-  UFuncHelper(args, ret, op, op_scalar, op_rscalar);
+  const nnvm::Op* op_rscalar = Op::Get("_npi_greater_scalar");
+  SetUFuncHelper(args, ret, op, op_scalar, op_rscalar);
 });
 
 MXNET_REGISTER_API("_npi.greater_equal")
@@ -58,8 +80,8 @@ MXNET_REGISTER_API("_npi.greater_equal")
   using namespace runtime;
   const nnvm::Op* op = Op::Get("_npi_greater_equal");
   const nnvm::Op* op_scalar = Op::Get("_npi_greater_equal_scalar");
-  const nnvm::Op* op_rscalar = Op::Get("_npi_greater_equal_scalar");
-  UFuncHelper(args, ret, op, op_scalar, op_rscalar);
+  const nnvm::Op* op_rscalar = Op::Get("_npi_less_equal_scalar");
+  SetUFuncHelper(args, ret, op, op_scalar, op_rscalar);
 });
 
 MXNET_REGISTER_API("_npi.less_equal")
@@ -67,8 +89,8 @@ MXNET_REGISTER_API("_npi.less_equal")
   using namespace runtime;
   const nnvm::Op* op = Op::Get("_npi_less_equal");
   const nnvm::Op* op_scalar = Op::Get("_npi_less_equal_scalar");
-  const nnvm::Op* op_rscalar = Op::Get("_npi_less_equal_scalar");
-  UFuncHelper(args, ret, op, op_scalar, op_rscalar);
+  const nnvm::Op* op_rscalar = Op::Get("_npi_greater_equal_scalar");
+  SetUFuncHelper(args, ret, op, op_scalar, op_rscalar);
 });
 
 }  // namespace mxnet
diff --git a/tests/python/unittest/test_numpy_interoperability.py b/tests/python/unittest/test_numpy_interoperability.py
index 6a2845e..8b50fc4 100644
--- a/tests/python/unittest/test_numpy_interoperability.py
+++ b/tests/python/unittest/test_numpy_interoperability.py
@@ -1947,6 +1947,8 @@ def _add_workload
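The user-visible effect of the fix, as a doctest-style sketch (assumes
npx.set_np(); before this change the reversed scalar op was wired to the wrong
operator, so scalar-on-the-left comparisons could return wrong results):

>>> a = mx.np.array([1., 2., 3.])
>>> mx.np.greater(2, a)    # scalar lhs dispatches to the reversed scalar op
array([ True, False, False])
>>> mx.np.less(2, a)
array([False, False,  True])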

[incubator-mxnet] branch master updated (638622f -> 2158106)

2020-06-30 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 638622f  Improve performance of broadcast_axis on CPU (#17882)
 add 2158106  [Numpy] FFI: tril_indices (#18546)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  |  2 +-
 python/mxnet/numpy/multiarray.py   |  2 +-
 src/api/operator/numpy/np_matrix_op.cc | 24 
 tests/python/unittest/test_numpy_op.py |  2 +-
 4 files changed, 27 insertions(+), 3 deletions(-)
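For context, the op whose FFI fast path this adds, sketched as a doctest
(assumes npx.set_np(); the int64 index dtype is an assumption):

>>> mx.np.tril_indices(3)
(array([0, 1, 1, 2, 2, 2], dtype=int64), array([0, 0, 1, 0, 1, 2], dtype=int64))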



[incubator-mxnet] branch master updated (028d01d -> cf3984b)

2020-06-09 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 028d01d  Drop list support in optimize_for (#18483)
 add cf3984b  [numpy] fix op repeat with list input (#18371)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  |   8 +-
 python/mxnet/symbol/numpy/_symbol.py   |   8 +-
 src/api/operator/numpy/np_matrix_op.cc |  22 --
 .../numpy/{np_diff_op.cc => np_repeat_op.cc}   |  31 +--
 src/operator/numpy/np_repeat_op-inl.h  | 221 +
 .../np_norm_backward.cc => np_repeat_op.cc}|  29 ++-
 .../np_norm_backward.cu => np_repeat_op.cu}|  14 +-
 .../python/unittest/test_numpy_interoperability.py |   8 +-
 tests/python/unittest/test_numpy_op.py |   2 +
 9 files changed, 283 insertions(+), 60 deletions(-)
 copy src/api/operator/numpy/{np_diff_op.cc => np_repeat_op.cc} (73%)
 create mode 100644 src/operator/numpy/np_repeat_op-inl.h
 copy src/operator/numpy/{linalg/np_norm_backward.cc => np_repeat_op.cc} (59%)
 copy src/operator/numpy/{linalg/np_norm_backward.cu => np_repeat_op.cu} (75%)
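What the fix enables, sketched (assumes npx.set_np(); per-element repeat
counts may now be passed as a Python list):

>>> mx.np.repeat(mx.np.array([1., 2.]), [2, 3])
array([1., 1., 2., 2., 2.])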



[incubator-mxnet] branch master updated: fix_np_where (#18451)

2020-06-03 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 8572506  fix_np_where (#18451)
8572506 is described below

commit 85725066767255090b57aec7f3b03628656afbf0
Author: Minghao Liu <40382964+tomm...@users.noreply.github.com>
AuthorDate: Wed Jun 3 15:16:24 2020 +0800

fix_np_where (#18451)
---
 src/api/operator/numpy/np_where_op.cc  | 2 +-
 tests/python/unittest/test_numpy_op.py | 7 +++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/api/operator/numpy/np_where_op.cc b/src/api/operator/numpy/np_where_op.cc
index a2ed14b..aca4e07 100644
--- a/src/api/operator/numpy/np_where_op.cc
+++ b/src/api/operator/numpy/np_where_op.cc
@@ -76,7 +76,7 @@ inline static void _npi_where_scalar2(runtime::MXNetArgs args,
   op::NumpyWhereScalar2Param param;
   nnvm::NodeAttrs attrs;
   param.x = args[1].operator double();
-  param.x = args[2].operator double();
+  param.y = args[2].operator double();
   attrs.op = op;
   attrs.parsed = param;
   SetAttrDict<op::NumpyWhereScalar2Param>(&attrs);
diff --git a/tests/python/unittest/test_numpy_op.py b/tests/python/unittest/test_numpy_op.py
index 2247700..441c727 100644
--- a/tests/python/unittest/test_numpy_op.py
+++ b/tests/python/unittest/test_numpy_op.py
@@ -9156,6 +9156,13 @@ def test_np_where():
         same(ret.asnumpy(), _np.where(cond.asnumpy(), x.asnumpy(), 1))
         ret_rscalar.backward()
         same(x.grad.asnumpy(), collapse_sum_like(_np.broadcast_to(cond.asnumpy(), ret.shape), shape_pair[1]))
+
+        # check both scalar case
+        x = _np.random.randint(0, 100)
+        y = _np.random.randint(0, 100)
+        mx_out = np.where(cond, x, y)
+        np_out = _np.where(cond, x, y)
+        same(mx_out, np_out)
 
 
 @with_seed()
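The one-character bug above (param.x assigned twice, so y was never set) only
shows up when both branches are scalars; a sketch of the now-correct path
(assumes npx.set_np(); float32 output follows the default dtype):

>>> cond = mx.np.array([True, False])
>>> mx.np.where(cond, 3, 7)    # both x and y are scalars
array([3., 7.])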



[incubator-mxnet] branch master updated: fix mixed type binary logic operators (#18427)

2020-06-02 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new c59a325  fix mixed type binary logic operators (#18427)
c59a325 is described below

commit c59a3255346ebe9bc0729c5a702fc99624ed2374
Author: Yijun Chen 
AuthorDate: Tue Jun 2 15:31:16 2020 +0800

fix mixed type binary logic operators (#18427)
---
 src/operator/mshadow_op.h  |  6 ++--
 src/operator/mxnet_op.h|  7 +
 .../numpy/np_elemwise_broadcast_logic_op.cc|  2 --
 src/operator/tensor/elemwise_binary_broadcast_op.h | 34 ++
 src/operator/tensor/elemwise_binary_op.h   |  4 ++-
 tests/python/unittest/test_numpy_op.py |  9 ++
 6 files changed, 52 insertions(+), 10 deletions(-)

diff --git a/src/operator/mshadow_op.h b/src/operator/mshadow_op.h
index 4cbb17d..9069af9 100644
--- a/src/operator/mshadow_op.h
+++ b/src/operator/mshadow_op.h
@@ -114,8 +114,10 @@ using std::is_integral;
 
 #define MXNET_BINARY_LOGIC_OP_NC(name, expr) \
   struct name : public mxnet_op::tunable<name> { \
-    template<typename DType> \
-    MSHADOW_XINLINE static bool Map(DType a, DType b) { \
+    template<typename DType, typename EType> \
+    MSHADOW_XINLINE static bool Map(DType lhs, EType rhs) { \
+      double a = static_cast<double>(lhs); \
+      double b = static_cast<double>(rhs); \
       return (expr); \
     } \
   }
diff --git a/src/operator/mxnet_op.h b/src/operator/mxnet_op.h
index 3f1c804..bc8c0af 100644
--- a/src/operator/mxnet_op.h
+++ b/src/operator/mxnet_op.h
@@ -860,6 +860,13 @@ struct op_with_req {
     KERNEL_ASSIGN(out[i], req, OP::Map(in[i], value));
   }
 
+  /*! \brief input is two tensors with different type and with a boolean output tensor */
+  template<typename LType, typename RType,
+           typename std::enable_if<!std::is_same<LType, RType>::value, int>::type = 0>
+  MSHADOW_XINLINE static void Map(index_t i, bool *out, const LType *lhs, const RType *rhs) {
+    KERNEL_ASSIGN(out[i], req, OP::Map(lhs[i], rhs[i]));
+  }
+
 #ifndef _WIN32
   /*! \brief inputs are two tensors with a half_t output tensor */
   template<typename DType,
diff --git a/src/operator/numpy/np_elemwise_broadcast_logic_op.cc b/src/operator/numpy/np_elemwise_broadcast_logic_op.cc
@@ ... @@
   CHECK_EQ(in_attrs->size(), 2U);
   CHECK_EQ(out_attrs->size(), 1U);
   if (in_attrs->at(0) == -1 && in_attrs->at(1) == -1) return false;
-  TYPE_ASSIGN_CHECK(*in_attrs, 0, in_attrs->at(1));
-  TYPE_ASSIGN_CHECK(*in_attrs, 1, in_attrs->at(0));
   TYPE_ASSIGN_CHECK(*out_attrs, 0, mshadow::kBool);
   return true;
 }
diff --git a/src/operator/tensor/elemwise_binary_broadcast_op.h b/src/operator/tensor/elemwise_binary_broadcast_op.h
index ffd0f12..6f6711e 100644
--- a/src/operator/tensor/elemwise_binary_broadcast_op.h
+++ b/src/operator/tensor/elemwise_binary_broadcast_op.h
@@ -209,6 +209,25 @@ struct binary_broadcast_kernel {
   }
 
   /*! \brief Map function for binary_broadcast_kernel */
+  template<int ndim, typename LType, typename RType, typename OType>
+  MSHADOW_XINLINE static void Map(index_t base, index_t length, OpReqType req,
+                                  const Shape<ndim> &lstride, const Shape<ndim> &rstride,
+                                  const Shape<ndim> &oshape, LType *lhs, RType *rhs,
+                                  OType *out) {
+    Shape<ndim> coord = unravel(base, oshape);
+    auto lidx = static_cast<index_t>(dot(coord, lstride));
+    auto ridx = static_cast<index_t>(dot(coord, rstride));
+    KERNEL_ASSIGN(out[base], req, OP::Map(lhs[lidx], rhs[ridx]));
+    // starts from 1 to avoid extra inc at end of loop
+    for (index_t i = 1; i < length; ++i) {
+      inc(&coord, oshape, &lidx, lstride, &ridx, rstride);
+      // When tuning, don't actually run the op, since it's not going to be tuned against
+      // the actual op we'll eventually be using
+      KERNEL_ASSIGN(out[base + i], req, OP::Map(lhs[lidx], rhs[ridx]));
+    }
+  }
+
+  /*! \brief Map function for binary_broadcast_kernel */
   template<int ndim, typename DType, typename OType>
   MSHADOW_XINLINE static void Map(index_t base, index_t length, OpReqType req,
                                   const Shape<ndim> &lstride, const Shape<ndim> &rstride,
@@ -430,23 +449,28 @@ void BinaryBroadcastComputeLogic(const nnvm::NodeAttrs& attrs,
                                  const std::vector<TBlob>& outputs) {
   if (outputs[0].shape_.Size() == 0U) return;
   mxnet::TShape new_lshape, new_rshape, new_oshape;
-  int ndim = BinaryBroadcastShapeCompact(inputs[0].shape_, inputs[1].shape_, outputs[0].shape_,
+  const TBlob& lhs = inputs[0];
+  const TBlob& rhs = inputs[1];
+  const TBlob& out = outputs[0];
+  int ndim = BinaryBroadcastShapeCompact(lhs.shape_, rhs.shape_, out.shape_,
                                          &new_lshape, &new_rshape, &new_oshape);
   if (!ndim) {
     ElemwiseBinaryOp::ComputeLogic<xpu, OP>(attrs, ctx, inputs, req, outputs);
   } else {
     if (req[0] == kNullOp) return;
     mshadow::Stream<xpu> *s = ctx.get_stream<xpu>();
-    MSHADOW_TYPE_SWITCH_WITH_BOOL(inputs[0].type_flag_, DType, {
-      BROADCAST_NDIM_SWITCH(ndim, NDim, {
+    MSHADOW_TYPE_SWITCH_WITH_BOOL(lhs.type_flag_, DType, {
+      MSHADOW_TYPE_SWITCH_WITH_BOOL(rhs.type_flag_,
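Net effect of the kernel changes, sketched as a doctest (assumes npx.set_np();
comparisons across different dtypes now take the mixed-type path and return a
bool array instead of failing):

>>> a = mx.np.ones((2,), dtype='float32')
>>> b = mx.np.ones((2,), dtype='int32')
>>> a == b
array([ True,  True])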

[incubator-mxnet] branch master updated: Add docs about default dtype (#18399)

2020-05-25 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 3efacd2  Add docs about default dtype (#18399)
3efacd2 is described below

commit 3efacd27f75e38e06151675407b0f17e3c1891a5
Author: JiangZhaoh <54654391+jiangzh...@users.noreply.github.com>
AuthorDate: Tue May 26 00:45:49 2020 +0800

Add docs about default dtype (#18399)

* add doc about default dtype

* fix sanity error
---
 python/mxnet/ndarray/numpy/_op.py |  1 +
 python/mxnet/numpy/multiarray.py  | 30 +-
 python/mxnet/util.py  |  5 +
 3 files changed, 31 insertions(+), 5 deletions(-)

diff --git a/python/mxnet/ndarray/numpy/_op.py b/python/mxnet/ndarray/numpy/_op.py
index 0b6a67a..e7b0921 100644
--- a/python/mxnet/ndarray/numpy/_op.py
+++ b/python/mxnet/ndarray/numpy/_op.py
@@ -9595,6 +9595,7 @@ def sum(a, axis=None, dtype=None, out=None, keepdims=None, initial=None, where=None):
 - Input type does not support Python native iterables(list, tuple, ...).
 - "out" param: cannot perform auto type cast. out ndarray's dtype must be the same as the expected output.
 - "initial" param is not supported yet. Please use ``None`` as input or skip it.
+- The default type is float32.
 
 Examples
 
diff --git a/python/mxnet/numpy/multiarray.py b/python/mxnet/numpy/multiarray.py
index ade0be7..9073f3f 100644
--- a/python/mxnet/numpy/multiarray.py
+++ b/python/mxnet/numpy/multiarray.py
@@ -2352,8 +2352,10 @@ def array(object, dtype=None, ctx=None):
         __array__ method returns an array, or any (nested) sequence.
     dtype : data-type, optional
         The desired data-type for the array.
-        When npx.is_np_default_dtype() returns False, default dtype is float32;
-        When npx.is_np_default_dtype() returns True, default dtype is float64.
+        The default dtype is ``object.dtype`` if `object` is an `ndarray`, `float32` otherwise.
+        Default dtype can be set to be consistent with official numpy by `npx.set_np(dtype=True)`.
+        - When npx.is_np_default_dtype() returns False, default dtype is float32;
+        - When npx.is_np_default_dtype() returns True, default dtype is float64.
 ctx : device context, optional
 Device context on which the memory is allocated. Default is
 `mxnet.context.current_context()`.
@@ -2375,6 +2377,13 @@ def array(object, dtype=None, ctx=None):
 >>> np.array([[1, 0], [0, 1]], dtype=bool)
 array([[ True, False],
[False,  True]])
+
+>>> np.array([1, 2, 3]).dtype
+dtype('float32')
+
+>>> npx.set_np(dtype=True)
+>>> np.array([1, 2, 3]).dtype
+dtype('float64')
 """
 if ctx is None:
 ctx = current_context()
@@ -6024,8 +6033,10 @@ def arange(start, stop=None, step=1, dtype=None, ctx=None):
         step size is 1.  If `step` is specified as a position argument,
         `start` must also be given.
     dtype : dtype
-        The type of the output array. The default is `float32` or 'float64',
-        which depends on your current default dtype.
+        The type of the output array.
+        Default dtype can be set to be consistent with official numpy by `npx.set_np(dtype=True)`.
+        - When npx.is_np_default_dtype() returns False, default dtype is float32;
+        - When npx.is_np_default_dtype() returns True, default dtype is int64.
 
 Returns
 ---
@@ -6050,6 +6061,12 @@ def arange(start, stop=None, step=1, dtype=None, ctx=None):
 
 >>> np.arange(3,7,2)
 array([3., 5.])
+
+>>> np.arange(3).dtype
+dtype('float32')
+>>> npx.set_np(dtype=True)
+>>> np.arange(3).dtype
+dtype('int64')
 """
 return _mx_nd_np.arange(start, stop, step, dtype, ctx)
 # pylint: enable=redefined-outer-name
@@ -7336,7 +7353,9 @@ def average(a, axis=None, weights=None, returned=False, out=None):
 - Does not guarantee the same behavior with numpy when given float16 dtype and overflow happens
 - Does not support complex dtype
 - The dtypes of a and weights must be the same
-- Integral a results in float32 or float64 returned dtype, which depends on your current default dtype
+- Integral a results in float32 or float64 returned dtype:
+  When npx.is_np_default_dtype() returns False, default dtype is float32,
+  When npx.is_np_default_dtype() returns True, default dtype is float64;
 Examples
 
@@ -11727,6 +11746,7 @@ def sum(a, axis=None, dtype=None, out=None, keepdims=None, initial=None, where=None):
 - Input type does not support Python native iterables(list, tuple, ...).
 - "out" param: cannot perform auto type cast. out nd

[incubator-mxnet] branch master updated: [numpy] Fix mean, prod with input of empty array (#18286)

2020-05-24 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 9c2c5d4  [numpy] Fix mean, prod with input of empty array  (#18286)
9c2c5d4 is described below

commit 9c2c5d45a6c838bd8348ce43a6168cf77d5c7125
Author: Yiyan66 <57363390+yiya...@users.noreply.github.com>
AuthorDate: Mon May 25 00:14:58 2020 +0800

[numpy] Fix mean, prod with input of empty array  (#18286)

* prod

* mean

* nan

* sanity

* change kernel

* include

Co-authored-by: Ubuntu 
---
 src/operator/numpy/np_broadcast_reduce_op.h| 39 --
 .../python/unittest/test_numpy_interoperability.py |  2 ++
 2 files changed, 38 insertions(+), 3 deletions(-)

diff --git a/src/operator/numpy/np_broadcast_reduce_op.h b/src/operator/numpy/np_broadcast_reduce_op.h
index d10e32a..6b59ac0 100644
--- a/src/operator/numpy/np_broadcast_reduce_op.h
+++ b/src/operator/numpy/np_broadcast_reduce_op.h
@@ -275,6 +275,16 @@ inline bool NeedSafeAcc(int itype, int otype) {
   return safe_acc_hint && rule;
 }
 
+namespace mxnet_op {
+struct set_to_nan {
+  template<typename DType>
+  MSHADOW_XINLINE static void Map(index_t i, DType *out) {
+    out[i] = DType(nanf(""));
+  }
+};
+
+}  // namespace mxnet_op
+
 void TVMOpReduce(const OpContext& ctx, const TBlob& input,
                  const dmlc::optional<mxnet::Tuple<int>>& axis,
                  const TBlob& output, const OpReqType req, const std::string& reducer_name);
@@ -296,9 +306,32 @@ void NumpyReduceAxesCompute(const nnvm::NodeAttrs& attrs,
   if (outputs[0].shape_.Size() == 0) return;
   if (inputs[0].shape_.Size() == 0 && outputs[0].shape_.Size() != 0) {
     using namespace mxnet_op;
-    MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
-      Kernel<set_zero, xpu>::Launch(s, outputs[0].shape_.Size(), outputs[0].dptr<DType>());
-    });
+    if (normalize) {
+      LOG(WARNING) << "WARNING: Mean of empty slice.";
+      if (mxnet::common::is_float(outputs[0].type_flag_)) {
+        MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+          Kernel<set_to_nan, xpu>::Launch(s, outputs[0].shape_.Size(),
+                                          outputs[0].dptr<DType>());
+        });
+      } else {
+        LOG(WARNING) << "WARNING: nan is outside the range of" <<
+                        "representable values of type 'int'";
+        MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+          Kernel<set_zero, xpu>::Launch(s, outputs[0].shape_.Size(),
+                                        outputs[0].dptr<DType>());
+        });
+      }
+    } else if (std::is_same<reducer, mshadow::red::product>::value) {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        Kernel<set_one, xpu>::Launch(s, outputs[0].shape_.Size(),
+                                     outputs[0].dptr<DType>());
+      });
+    } else {
+      MSHADOW_TYPE_SWITCH(outputs[0].type_flag_, DType, {
+        Kernel<set_zero, xpu>::Launch(s, outputs[0].shape_.Size(),
+                                      outputs[0].dptr<DType>());
+      });
+    }
     return;
   }
   CHECK_NE(req[0], kWriteInplace) << "Reduce does not support write in-place";
diff --git a/tests/python/unittest/test_numpy_interoperability.py b/tests/python/unittest/test_numpy_interoperability.py
index 342372c..0060b73 100644
--- a/tests/python/unittest/test_numpy_interoperability.py
+++ b/tests/python/unittest/test_numpy_interoperability.py
@@ -1105,6 +1105,7 @@ def _add_workload_mean(array_pool):
     OpArgMngr.add_workload('mean', array_pool['4x1'])
     OpArgMngr.add_workload('mean', array_pool['4x1'], axis=0, keepdims=True)
     OpArgMngr.add_workload('mean', np.array([[1, 2, 3], [4, 5, 6]]))
+    OpArgMngr.add_workload('mean', np.array([]).reshape(2,0,0))
     OpArgMngr.add_workload('mean', np.array([[1, 2, 3], [4, 5, 6]]), axis=0)
     OpArgMngr.add_workload('mean', np.array([[1, 2, 3], [4, 5, 6]]), axis=1)
 
@@ -1139,6 +1140,7 @@ def _add_workload_atleast_nd():
 
 def _add_workload_prod(array_pool):
     OpArgMngr.add_workload('prod', array_pool['4x1'])
+    OpArgMngr.add_workload('prod', np.array([]).reshape(2,0,0))
 
 
 def _add_workload_product(array_pool):
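Expected behavior after the fix, sketched (assumes npx.set_np(); matches
official NumPy: the mean of an empty slice is nan, with a warning, and the
product of an empty slice is the multiplicative identity):

>>> mx.np.mean(mx.np.array([]))
array(nan)
>>> mx.np.prod(mx.np.array([]))
array(1.)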



[incubator-mxnet] branch master updated: fix true_divide (#18393)

2020-05-23 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 0a3bdff  fix true_divide (#18393)
0a3bdff is described below

commit 0a3bdffeccfdc8db6213f8b3d9e18cf9a8e93b03
Author: Xingjian Shi 
AuthorDate: Sat May 23 18:43:34 2020 -0700

fix true_divide (#18393)
---
 src/operator/numpy/np_true_divide-inl.h | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/src/operator/numpy/np_true_divide-inl.h b/src/operator/numpy/np_true_divide-inl.h
index c6b8cdf..9edd795 100644
--- a/src/operator/numpy/np_true_divide-inl.h
+++ b/src/operator/numpy/np_true_divide-inl.h
@@ -125,7 +125,7 @@ void TrueDivideElemwiseCompute(const nnvm::NodeAttrs &attrs,
 // Case when types of the 2 input tensors are different
 if (common::is_float(lhs.type_flag_) && common::is_float(rhs.type_flag_)) {
   // both lhs and rhs are float types, output type is the more precise one
-  LOG(ERROR) << "not implemented yet...";
+  LOG(FATAL) << "not implemented yet...";
 } else if (common::is_float(lhs.type_flag_) || common::is_float(rhs.type_flag_)) {
   // one is float type, the other is integer type, the output type should be the same as float
   CHECK_EQ(out.type_flag_,
@@ -154,14 +154,14 @@ void TrueDivideElemwiseCompute(const nnvm::NodeAttrs &attrs,
   }
 } else {
   // lhs is integer type, rhs is integer type, output type should be float
-  LOG(ERROR) << "not implemented yet...";
+  LOG(FATAL) << "not implemented yet...";
 }
 #else
 // Windows case: using temp space for casting the type
 // Case when types of the 2 input tensors are different
 if (common::is_float(lhs.type_flag_) && common::is_float(rhs.type_flag_)) {
   // both lhs and rhs are float types, output type is the more precise one
-  LOG(ERROR) << "not implemented yet...";
+  LOG(FATAL) << "not implemented yet...";
 } else if (common::is_float(lhs.type_flag_) || common::is_float(rhs.type_flag_)) {
   // lhs is float type, rhs is integer type, the output type should be the same as lhs
   CHECK_EQ(out.type_flag_,
@@ -191,7 +191,7 @@ void TrueDivideElemwiseCompute(const nnvm::NodeAttrs &attrs,
   }
 } else {
   // lhs is integer type, rhs is integer type, output type should be float
-  LOG(ERROR) << "not implemented yet...";
+  LOG(FATAL) << "not implemented yet...";
 }
 #endif
   }
@@ -245,7 +245,7 @@ void TrueDivideBroadcastCompute(const nnvm::NodeAttrs& attrs,
   } else {
     if (common::is_float(lhs.type_flag_) && common::is_float(rhs.type_flag_)) {
       // lhs and rhs have different float types, the output is the more precise one
-      LOG(ERROR) << "not implemented yet...";
+      LOG(FATAL) << "not implemented yet...";
     } else if (common::is_float(lhs.type_flag_) || common::is_float(rhs.type_flag_)) {
       // one of lhs and rhs is float, the output is the same type as the float one
       if (common::is_float(lhs.type_flag_)) {
@@ -273,7 +273,7 @@ void TrueDivideBroadcastCompute(const nnvm::NodeAttrs& attrs,
       }
     } else {
      // lhs and rhs have different integer types, the output is float type
-      LOG(ERROR) << "not implemented yet...";
+      LOG(FATAL) << "not implemented yet...";
     }
   }
 });
@@ -306,7 +306,7 @@ void TrueDivideBroadcastCompute(const nnvm::NodeAttrs& attrs,
     } else {
       if (common::is_float(lhs.type_flag_) && common::is_float(rhs.type_flag_)) {
         // lhs and rhs have different float types, the output is the more precise one
-        LOG(ERROR) << "not implemented yet...";
+        LOG(FATAL) << "not implemented yet...";
       } else if (common::is_float(lhs.type_flag_) || common::is_float(rhs.type_flag_)) {
         // one of lhs and rhs is float, the output is the same type as the float one
         TBlob temp_tblob;
@@ -337,7 +337,7 @@ void TrueDivideBroadcastCompute(const nnvm::NodeAttrs& attrs,
       }
     } else {
       // lhs and rhs have different integer types, the output is float type
-      LOG(ERROR) << "not implemented yet...";
+      LOG(FATAL) << "not implemented yet...";
     }
   }
 #endif
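The change swaps LOG(ERROR) for LOG(FATAL) on the type combinations that are
not implemented, so they now fail fast instead of logging and continuing with
undefined results. For contrast, a sketch of a supported mixed-type path
(assumes npx.set_np()):

>>> mx.np.array([1., 2.]) / mx.np.array([2, 2], dtype='int32')
array([0.5, 1. ])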



[incubator-mxnet] branch master updated (d9fc74e -> 48dea6e)

2020-05-22 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from d9fc74e  Fix FInferShape for some ops to support partial type 
inference (#18348)
 add 48dea6e  Fix binary scalar dtype and add bool support (#18277)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/symbol/numpy/_symbol.py   |   8 +-
 src/api/operator/ufunc_helper.cc   |  64 -
 src/operator/contrib/gradient_multiplier_op.cc |   6 +-
 .../numpy/np_elemwise_broadcast_logic_op.cc|  18 +--
 src/operator/numpy/np_elemwise_broadcast_op.cc |  30 ++---
 src/operator/numpy/np_elemwise_broadcast_op.h  |  10 --
 .../numpy/np_elemwise_broadcast_op_extended.cc |  68 +-
 .../numpy/np_elemwise_broadcast_op_extended_sec.cc |  44 +++
 src/operator/numpy/np_matrix_op-inl.h  |   4 +-
 src/operator/numpy/np_true_divide-inl.h|   3 +-
 src/operator/numpy/np_true_divide.cc   |  12 +-
 src/operator/tensor/elemwise_binary_scalar_op.h| 123 +
 .../tensor/elemwise_binary_scalar_op_basic.cc  |  49 +++
 .../tensor/elemwise_binary_scalar_op_extended.cc   |  36 ++---
 .../tensor/elemwise_binary_scalar_op_logic.cc  |   3 +-
 tests/python/unittest/test_higher_order_grad.py|   6 +
 tests/python/unittest/test_numpy_op.py | 145 +++--
 tests/python/unittest/test_symbol.py   |   2 -
 18 files changed, 397 insertions(+), 234 deletions(-)
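A sketch of what the new bool support allows (assumes npx.set_np(); the int64
result dtype is an assumption based on official NumPy's promotion rules):

>>> a = mx.np.array([True, False])
>>> a + 1    # bool ndarray combined with a scalar no longer raises
array([2, 1], dtype=int64)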



[incubator-mxnet] branch master updated: New set default dtype (#18251)

2020-05-19 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new b904d48  New set default dtype (#18251)
b904d48 is described below

commit b904d4838f6bd6a29171389c6e213ca03ec772b9
Author: JiangZhaoh <54654391+jiangzh...@users.noreply.github.com>
AuthorDate: Wed May 20 07:38:49 2020 +0800

New set default dtype (#18251)

* apply #17283

* fix issue #18060

* fix error

* remove redundant code

* fix CI error

* replace Flase to False

* add 'dtype=False' to set_np()

* fix doc

* default 'arange' default np dtype as int64
---
 benchmark/python/einsum/benchmark_einsum.py|   2 +-
 benchmark/python/ffi/benchmark_ffi.py  |   5 +-
 include/mxnet/c_api.h  |  14 ++
 include/mxnet/imperative.h |  26 ++-
 python/mxnet/__init__.py   |   1 +
 python/mxnet/ndarray/numpy/_op.py  | 147 -
 python/mxnet/ndarray/numpy/random.py   |  33 +--
 python/mxnet/numpy/multiarray.py   | 166 +--
 python/mxnet/numpy/random.py   |   8 +-
 python/mxnet/numpy_extension/__init__.py   |   3 +-
 python/mxnet/symbol/numpy/_symbol.py   | 103 +
 python/mxnet/symbol/numpy/random.py|  32 +--
 python/mxnet/symbol/numpy_extension/random.py  |   2 -
 python/mxnet/test_utils.py |   2 +-
 python/mxnet/util.py   | 218 ++-
 .../operator/numpy/np_broadcast_reduce_op_value.cc |   2 +-
 src/api/operator/numpy/np_init_op.cc   |  99 -
 src/api/operator/numpy/np_window_op.cc |   3 +-
 src/api/operator/random/np_gamma_op.cc |   2 +-
 src/api/operator/random/np_normal_op.cc|   2 +-
 src/api/operator/random/np_uniform_op.cc   |   2 +-
 src/c_api/c_api_ndarray.cc |  12 ++
 src/common/utils.h |  14 ++
 src/operator/numpy/linalg/np_gesvd.cc  |   1 +
 src/operator/numpy/np_broadcast_reduce_op.h|   1 +
 src/operator/numpy/np_broadcast_reduce_op_value.cc |   2 +-
 src/operator/numpy/np_init_op.cc   |  44 +++-
 src/operator/numpy/np_init_op.cu   |   6 +
 src/operator/numpy/np_init_op.h|   9 +-
 src/operator/numpy/np_true_divide-inl.h|  24 ++-
 src/operator/numpy/np_true_divide.cc   |   7 +-
 src/operator/numpy/np_window_op.cc |   6 +-
 src/operator/numpy/np_window_op.h  |   3 +-
 src/operator/numpy/random/np_bernoulli_op.h|   8 +-
 src/operator/numpy/random/np_gamma_op.cc   |   2 +-
 src/operator/numpy/random/np_gamma_op.h|   8 +-
 src/operator/numpy/random/np_laplace_op.h  |   2 +-
 src/operator/numpy/random/np_normal_op.h   |   8 +-
 src/operator/numpy/random/np_uniform_op.h  |   8 +-
 src/operator/random/sample_op.h|   3 +-
 src/operator/tensor/init_op.cc |   2 -
 src/operator/tensor/init_op.h  |  52 +++--
 tests/python/unittest/test_numpy_default_dtype.py  | 230 +
 tests/python/unittest/test_numpy_op.py |  15 +-
 44 files changed, 1060 insertions(+), 279 deletions(-)

diff --git a/benchmark/python/einsum/benchmark_einsum.py b/benchmark/python/einsum/benchmark_einsum.py
index 6de8223..3d1a708 100644
--- a/benchmark/python/einsum/benchmark_einsum.py
+++ b/benchmark/python/einsum/benchmark_einsum.py
@@ -83,5 +83,5 @@ def test_np_einsum():
 
 
 if __name__ == "__main__":
-    npx.set_np()
+    npx.set_np(dtype=False)
     test_np_einsum()
diff --git a/benchmark/python/ffi/benchmark_ffi.py 
b/benchmark/python/ffi/benchmark_ffi.py
index d292377..df4ace2 100644
--- a/benchmark/python/ffi/benchmark_ffi.py
+++ b/benchmark/python/ffi/benchmark_ffi.py
@@ -51,6 +51,9 @@ def generate_workloads():
 def prepare_workloads():
 pool = generate_workloads()
 OpArgMngr.add_workload("zeros", (2, 2))
+OpArgMngr.add_workload("full", (2, 2), 10)
+OpArgMngr.add_workload("identity", 3)
+OpArgMngr.add_workload("ones", (2, 2))
 OpArgMngr.add_workload("einsum", "ii", pool['2x2'], optimize=False)
 OpArgMngr.add_workload("unique", pool['1'], return_index=True, 
return_inverse=True, return_counts=True, axis=-1)
 OpArgMngr.add_workload("dstack", (pool['2x1'], pool['2x1'], pool['2x1'], 
pool['2x1']))
@@ -256,7 +259,7 @@ if __name__ == "__main__":
 import numpy as onp
 from mxnet import np as dnp
 
-mx.n
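
A minimal sketch of the dtype switch this commit introduces, inferred only from the commit message and the benchmark change above (the exact defaults are assumptions, not verified against this build):

    from mxnet import np, npx

    npx.set_np(dtype=False)          # NumPy semantics, but keep MXNet's float32 default
    print(np.zeros((2, 2)).dtype)    # expected: float32

    npx.set_np(dtype=True)           # opt in to official-NumPy default dtypes
    print(np.zeros((2, 2)).dtype)    # expected: float64
    print(np.arange(3).dtype)        # expected: int64, per this commit's message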


[incubator-mxnet] branch master updated (5f00c4b -> 53b34cb)

2020-05-18 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 5f00c4b  Disable test_sequential_rnn_cells (#18360)
 add 53b34cb  [Numpy] FFI: max/min/amax/amin (#17824)

No new revisions were added by this update.

Summary of changes:
 benchmark/python/ffi/benchmark_ffi.py  |   4 +
 python/mxnet/_numpy_op_doc.py  | 250 
 python/mxnet/ndarray/numpy/_op.py  | 256 +++-
 python/mxnet/numpy/multiarray.py   | 260 -
 python/mxnet/symbol/numpy/_symbol.py   | 192 ++-
 python/mxnet/symbol/numpy/linalg.py|   6 +-
 .../operator/numpy/np_broadcast_reduce_op_value.cc | 124 ++
 src/operator/numpy/np_broadcast_reduce_op.h|   9 +
 src/operator/numpy/np_broadcast_reduce_op_value.cc |  16 +-
 src/operator/numpy/np_broadcast_reduce_op_value.cu |   8 +-
 10 files changed, 852 insertions(+), 273 deletions(-)
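
For orientation, a usage sketch of the four reductions gaining FFI entry points here, patterned on the workloads added to benchmark_ffi.py and the expected values in the _np_max docstring removed further below:

    from mxnet import np, npx
    npx.set_np()

    a = np.arange(4).reshape((2, 2))
    out = np.zeros((2,))

    np.max(a, axis=0, out=out, keepdims=False)   # column maxima -> array([2., 3.])
    np.min(a, axis=0, out=out, keepdims=False)   # column minima -> array([0., 1.])
    np.amax(a, axis=1)                           # row maxima    -> array([1., 3.])
    np.amin(a, axis=1)                           # row minima    -> array([0., 2.])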



[incubator-mxnet] branch master updated: [Numpy] FFI: max/min/amax/amin (#17824)

2020-05-18 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 53b34cb  [Numpy] FFI: max/min/amax/amin (#17824)
53b34cb is described below

commit 53b34cb63e6cf4ccb32fa6771357ccb97180dc6f
Author: Minghao Liu <40382964+tomm...@users.noreply.github.com>
AuthorDate: Tue May 19 12:29:31 2020 +0800

[Numpy] FFI: max/min/amax/amin (#17824)

* ffi_min_max

* pylint fix

Co-authored-by: Hao Jin 
---
 benchmark/python/ffi/benchmark_ffi.py  |   4 +
 python/mxnet/_numpy_op_doc.py  | 250 
 python/mxnet/ndarray/numpy/_op.py  | 256 +++-
 python/mxnet/numpy/multiarray.py   | 260 -
 python/mxnet/symbol/numpy/_symbol.py   | 192 ++-
 python/mxnet/symbol/numpy/linalg.py|   6 +-
 .../operator/numpy/np_broadcast_reduce_op_value.cc | 124 ++
 src/operator/numpy/np_broadcast_reduce_op.h|   9 +
 src/operator/numpy/np_broadcast_reduce_op_value.cc |  16 +-
 src/operator/numpy/np_broadcast_reduce_op_value.cu |   8 +-
 10 files changed, 852 insertions(+), 273 deletions(-)

diff --git a/benchmark/python/ffi/benchmark_ffi.py 
b/benchmark/python/ffi/benchmark_ffi.py
index 9daf854..d292377 100644
--- a/benchmark/python/ffi/benchmark_ffi.py
+++ b/benchmark/python/ffi/benchmark_ffi.py
@@ -188,6 +188,10 @@ def prepare_workloads():
 OpArgMngr.add_workload("mean", pool['2x2'], axis=0, keepdims=True)
 OpArgMngr.add_workload("random.gamma", 1, size=(2, 3))
 OpArgMngr.add_workload("random.normal", 1, size=(2, 3))
+OpArgMngr.add_workload("max", pool["2x2"], axis=0, out=pool['2'], 
keepdims=False)
+OpArgMngr.add_workload("min", pool["2x2"], axis=0, out=pool['2'], 
keepdims=False)
+OpArgMngr.add_workload("amax", pool["2x2"], axis=1, out=pool['2'], 
keepdims=False)
+OpArgMngr.add_workload("amin", pool["2x2"], axis=1, out=pool['2'], 
keepdims=False)
 
 unary_ops = ['negative', 'reciprocal', 'abs', 'sign', 'rint', 'ceil', 
'floor',
  'bitwise_not', 'trunc', 'fix', 'square', 'sqrt', 'cbrt', 
'exp',
diff --git a/python/mxnet/_numpy_op_doc.py b/python/mxnet/_numpy_op_doc.py
index 47d7545..198f151 100644
--- a/python/mxnet/_numpy_op_doc.py
+++ b/python/mxnet/_numpy_op_doc.py
@@ -348,256 +348,6 @@ def _np_squeeze(a, axis=None, out=None):
 pass
 
 
-def _np_max(a, axis=None, keepdims=False, out=None):
-"""
-Return the maximum of an array or maximum along an axis.
-
-Parameters
---
-a : ndarray
-Input data.
-axis : int, optional
-Axis along which to operate.  By default, flattened input is used.
-out : ndarray, optional
-Alternative output array in which to place the result.  Must
-be of the same shape and buffer length as the expected output.
-See `doc.ufuncs` (Section "Output arguments") for more details.
-keepdims : bool, optional
-If this is set to True, the axes which are reduced are left
-in the result as dimensions with size one. With this option,
-the result will broadcast correctly against the original `arr`.
-
-Returns
----
-max : ndarray
-Maximum of `a`. If `axis` is None, the result is an array of dimension 
1.
-If `axis` is given, the result is an array of dimension
-``a.ndim - 1``.
-
-See Also
-
-min :
-The minimum value of an array along a given axis, ignoring any nan.
-maximum :
-Element-wise maximum of two arrays, ignoring any nan.
-argmax :
-Return the indices of the maximum values.
-
-Notes
--
-NaN in the orginal `numpy` is denoted as nan and will be ignored.
-
-Don't use `max` for element-wise comparison of 2 arrays; when
-``a.shape[0]`` is 2, ``maximum(a[0], a[1])`` is faster than
-``max(a, axis=0)``.
-
-Examples
-
->>> a = np.arange(4).reshape((2,2))
->>> a
-array([[0., 1.],
-[2., 3.]])
->>> np.max(a)# Maximum of the flattened array
-array(3.)
->>> np.max(a, axis=0)# Maxima along the first axis
-array([2., 3.])
->>> np.max(a, axis=1)# Maxima along the second axis
-array([1., 3.])
-
->>> b = np.arange(5, dtype=np.float32)
->>> b[2] = np.nan
->>> np.max(b)
-array(4.)
-"""
-pass
-
-
-def _np_amax(a, axis=None, keepdims=False, out=None):
-"""
-Return the maximum of an array or maximum along an axis.
-
-Parameters
-   

[incubator-mxnet] branch master updated (18b6e05 -> 0523f09)

2020-05-11 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 18b6e05  Deprecate dataset transform= argument in gluon data API 
(#17852)
 add 0523f09  [Numpy] New FFIs for Operator: squeeze, repeat, around, 
round, diagflat (#18263)

No new revisions were added by this update.

Summary of changes:
 benchmark/python/ffi/benchmark_ffi.py  |   5 +
 python/mxnet/ndarray/numpy/_op.py  | 143 -
 python/mxnet/numpy/multiarray.py   | 143 -
 python/mxnet/symbol/numpy/_symbol.py   | 142 +++-
 .../operator/numpy/np_elemwise_unary_op_basic.cc   |  24 
 src/api/operator/numpy/np_matrix_op.cc |  62 +
 src/operator/numpy/np_matrix_op-inl.h  |   5 +
 src/operator/numpy/np_matrix_op.cc |   8 +-
 src/operator/numpy/np_matrix_op.cu |   6 +-
 src/operator/tensor/elemwise_unary_op.h|   5 +
 src/operator/tensor/matrix_op-inl.h|  12 ++
 src/operator/tensor/matrix_op.cc   |   2 +-
 12 files changed, 537 insertions(+), 20 deletions(-)
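
A quick sketch of the ops this change routes through the new FFI, assuming they mirror official NumPy semantics:

    from mxnet import np, npx
    npx.set_np()

    a = np.array([[1.26, 2.64]])
    np.squeeze(a)                   # drop the size-1 axis -> shape (2,)
    np.repeat(a, 2, axis=1)         # -> shape (1, 4)
    np.around(a, decimals=1)        # round() takes the same path
    np.diagflat(np.array([1, 2]))   # 2x2 matrix with [1, 2] on its diagonal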



[incubator-mxnet] branch master updated: [Numpy] Port nd.random.multinomial to npx.sample_categorical (#18272)

2020-05-11 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 9d44086  [Numpy] Port nd.random.multinomial to npx.sample_categorical 
(#18272)
9d44086 is described below

commit 9d440868603ad26b702e12ddd2587e5c4b56e42b
Author: Xi Wang 
AuthorDate: Tue May 12 04:57:56 2020 +0800

[Numpy] Port nd.random.multinomial to npx.sample_categorical (#18272)

* port nd.multinomial to npx.sample_categorical

* move to npx.random
---
 src/operator/random/sample_multinomial_op.cc |  1 +
 tests/python/unittest/test_numpy_op.py   | 27 +++
 2 files changed, 28 insertions(+)

diff --git a/src/operator/random/sample_multinomial_op.cc 
b/src/operator/random/sample_multinomial_op.cc
index bba76ce..f0aa246 100644
--- a/src/operator/random/sample_multinomial_op.cc
+++ b/src/operator/random/sample_multinomial_op.cc
@@ -32,6 +32,7 @@ DMLC_REGISTER_PARAMETER(SampleMultinomialParam);
 
 NNVM_REGISTER_OP(_sample_multinomial)
 .add_alias("sample_multinomial")
+.add_alias("_npx__random_categorical")
 .describe(R"code(Concurrent sampling from multiple multinomial distributions.
 
 *data* is an *n* dimensional array whose last dimension has length *k*, where
diff --git a/tests/python/unittest/test_numpy_op.py 
b/tests/python/unittest/test_numpy_op.py
index bb07a57..3472481 100644
--- a/tests/python/unittest/test_numpy_op.py
+++ b/tests/python/unittest/test_numpy_op.py
@@ -4545,6 +4545,33 @@ def test_np_multivariate_normal():
 
 @with_seed()
 @use_np
+def test_npx_categorical():
+class TestNumpyCategorical(HybridBlock):
+def __init__(self, size=None):
+super(TestNumpyCategorical, self).__init__()
+self.size = size
+
+def hybrid_forward(self, F, prob):
+if self.size is None:
+return F.npx.random.categorical(prob)
+return F.npx.random.categorical(prob, shape=self.size)
+
+batch_sizes = [(2,), (2, 3)]
+event_shapes = [None, (10,), (10, 12)]
+num_event = [2, 4, 10]
+for batch_size, num_event, event_shape in itertools.product(batch_sizes, 
num_event, event_shapes):
+for hybridize in [True, False]:
+prob = np.ones(batch_size + (num_event,)) / num_event
+net = TestNumpyCategorical(event_shape)
+if hybridize:
+net.hybridize()
+mx_out = net(prob)
+desired_shape = batch_size + event_shape if event_shape is not 
None else batch_size
+assert mx_out.shape == desired_shape
+
+
+@with_seed()
+@use_np
 def test_random_seed():
 for seed in [234, 594, 7240, 20394]:
 ret = []
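
Distilled from the new test_npx_categorical above, a usage sketch of the npx alias this commit registers:

    from mxnet import np, npx
    npx.set_np()

    prob = np.ones((2, 3)) / 3                 # two categorical distributions over 3 events
    npx.random.categorical(prob)               # one draw per row   -> shape (2,)
    npx.random.categorical(prob, shape=(4,))   # four draws per row -> shape (2, 4)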



[incubator-mxnet] branch master updated (33dfbf7 -> f00b9ab)

2020-05-09 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 33dfbf7  fix when clicking version dropdown it jumps to top of the 
page (#18238)
 add f00b9ab  fix mixed type backward (#18250)

No new revisions were added by this update.

Summary of changes:
 src/operator/mshadow_op.h  | 52 +
 src/operator/numpy/np_elemwise_broadcast_op.cc | 64 --
 src/operator/numpy/np_elemwise_broadcast_op.cu | 23 -
 src/operator/numpy/np_true_divide-inl.h|  1 +
 src/operator/numpy/np_true_divide.cc   | 18 +++-
 src/operator/numpy/np_true_divide.cu   |  4 ++
 src/operator/operator_tune.cc  |  2 +
 tests/python/unittest/test_numpy_op.py |  9 ++--
 8 files changed, 163 insertions(+), 10 deletions(-)
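
A minimal sketch of the case this fix targets, taking gradients through a binary op whose operands differ in dtype (the exact dtype pairs covered are an assumption):

    from mxnet import np, npx, autograd
    npx.set_np()

    a = np.ones((2, 2), dtype='float32')
    b = np.ones((2, 2), dtype='float16')
    a.attach_grad()
    with autograd.record():
        c = (a * b).sum()   # mixed-dtype elementwise multiply
    c.backward()            # the backward pass this commit fixes; a.grad stays float32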



[incubator-mxnet] branch master updated (3801f97 -> 33436ac)

2020-05-06 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3801f97  [CI] fix debug build (#18240)
 add 33436ac  [numpy] add op tri (#17846)

No new revisions were added by this update.

Summary of changes:
 benchmark/python/ffi/benchmark_ffi.py  |   1 +
 python/mxnet/ndarray/numpy/_op.py  | 149 -
 python/mxnet/numpy/fallback.py |   2 -
 python/mxnet/numpy/multiarray.py   | 122 +
 python/mxnet/symbol/numpy/_symbol.py   |  36 -
 .../operator/numpy/{np_tril_op.cc => np_tri_op.cc} |  30 +++--
 src/operator/numpy/np_tri_op-inl.h | 124 +
 src/operator/numpy/np_tri_op.cc|  69 ++
 .../numpy/{np_memory_op.cu => np_tri_op.cu}|   9 +-
 tests/python/unittest/test_numpy_op.py |  34 +
 10 files changed, 556 insertions(+), 20 deletions(-)
 copy src/api/operator/numpy/{np_tril_op.cc => np_tri_op.cc} (62%)
 create mode 100644 src/operator/numpy/np_tri_op-inl.h
 create mode 100644 src/operator/numpy/np_tri_op.cc
 copy src/operator/numpy/{np_memory_op.cu => np_tri_op.cu} (83%)
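
np.tri builds a lower-triangular matrix of ones; a short sketch, assuming it matches official NumPy:

    from mxnet import np, npx
    npx.set_np()

    np.tri(3)          # 3x3 ones on and below the main diagonal
    np.tri(3, 4, k=1)  # 3x4, boundary diagonal shifted up by one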



[incubator-mxnet] branch master updated: [numpy] fix np.random.normal/gumbel/logistic ffi (#18247)

2020-05-06 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new c415ce6  [numpy] fix np.random.normal/gumbel/logistic ffi (#18247)
c415ce6 is described below

commit c415ce60fcd9ee55226327f8fbb8513156852a8a
Author: Yiyan66 <57363390+yiya...@users.noreply.github.com>
AuthorDate: Wed May 6 23:51:40 2020 +0800

[numpy] fix np.random.normal/gumbel/logistic ffi (#18247)

* issue 18242

* add test

Co-authored-by: Ubuntu 
---
 .../operator/numpy/random/np_location_scale_op.cc  | 39 +++---
 src/api/operator/random/np_normal_op.cc| 10 +++---
 tests/python/unittest/test_numpy_op.py |  9 +
 3 files changed, 35 insertions(+), 23 deletions(-)

diff --git a/src/api/operator/numpy/random/np_location_scale_op.cc 
b/src/api/operator/numpy/random/np_location_scale_op.cc
index d4702fc..d163b0b 100644
--- a/src/api/operator/numpy/random/np_location_scale_op.cc
+++ b/src/api/operator/numpy/random/np_location_scale_op.cc
@@ -23,6 +23,7 @@
  */
 #include 
 #include 
+#include <vector>
 #include "../../utils.h"
 #include "../../../../operator/numpy/random/np_location_scale_op.h"
 
@@ -47,7 +48,7 @@ MXNET_REGISTER_API("_npi.gumbel")
   if (args[2].type_code() == kDLInt) {
   param.size = Tuple<int64_t>(1, args[2].operator int64_t());
   } else if (args[2].type_code() == kNull) {
-  param.size = Tuple<int64_t>({1});
+  param.size = dmlc::optional<mxnet::Tuple<int64_t>>();
   } else {
   param.size = Tuple<int64_t>(args[2].operator ObjectRef());
   }
@@ -58,7 +59,7 @@ MXNET_REGISTER_API("_npi.gumbel")
   NDArray** outputs = out == nullptr ? nullptr : &out;
   int num_outputs = out != nullptr;
   int scalar = scalar_number(args);
-  NDArray* inputs[2];
+  std::vector<NDArray*> inputs;
   int num_inputs = 0;
   if (scalar == 2) {
 param.loc = args[0].operator double();
@@ -66,24 +67,24 @@ MXNET_REGISTER_API("_npi.gumbel")
   } else if (scalar == 0) {
 param.loc = dmlc::nullopt;
 param.scale = dmlc::nullopt;
-inputs[0] = args[0].operator mxnet::NDArray*();
-inputs[1] = args[1].operator mxnet::NDArray*();
+inputs.push_back(args[0].operator mxnet::NDArray*());
+inputs.push_back(args[1].operator mxnet::NDArray*());
 num_inputs = 2;
   } else {
 if (args[0].type_code() == kDLFloat || args[0].type_code() == kDLInt) {
-  param.loc = dmlc::nullopt;
-  param.scale = args[1].operator double();
-  inputs[0] = args[0].operator mxnet::NDArray*();
-} else {
   param.loc = args[0].operator double();
   param.scale = dmlc::nullopt;
-  inputs[0] = args[1].operator mxnet::NDArray*();
+  inputs.push_back(args[1].operator mxnet::NDArray*());
+} else {
+  param.loc = dmlc::nullopt;
+  param.scale = args[1].operator double();
+  inputs.push_back(args[0].operator mxnet::NDArray*());
 }
 num_inputs = 1;
   }
   attrs.parsed = std::move(param);
   SetAttrDict();
-  auto ndoutputs = Invoke(op, &attrs, num_inputs, inputs,
+  auto ndoutputs = Invoke(op, &attrs, num_inputs, inputs.data(),
   &num_outputs, outputs);
   if (out) {
 *ret = PythonArg(4);
@@ -113,7 +114,7 @@ MXNET_REGISTER_API("_npi.logistic")
   NDArray** outputs = out == nullptr ? nullptr : &out;
   int num_outputs = out != nullptr;
   int scalar = scalar_number(args);
-  NDArray* inputs[2];
+  std::vector<NDArray*> inputs;
   int num_inputs = 0;
   if (scalar == 2) {
 param.loc = args[0].operator double();
@@ -121,24 +122,24 @@ MXNET_REGISTER_API("_npi.logistic")
   } else if (scalar == 0) {
 param.loc = dmlc::nullopt;
 param.scale = dmlc::nullopt;
-inputs[0] = args[0].operator mxnet::NDArray*();
-inputs[1] = args[1].operator mxnet::NDArray*();
+inputs.push_back(args[0].operator mxnet::NDArray*());
+inputs.push_back(args[1].operator mxnet::NDArray*());
 num_inputs = 2;
   } else {
 if (args[0].type_code() == kDLFloat || args[0].type_code() == kDLInt) {
-  param.loc = dmlc::nullopt;
-  param.scale = args[1].operator double();
-  inputs[0] = args[0].operator mxnet::NDArray*();
-} else {
   param.loc = args[0].operator double();
   param.scale = dmlc::nullopt;
-  inputs[0] = args[1].operator mxnet::NDArray*();
+  inputs.push_back(args[1].operator mxnet::NDArray*());
+} else {
+  param.loc = dmlc::nullopt;
+  param.scale = args[1].operator double();
+  inputs.push_back(args[0].operator mxnet::NDArray*());
 }
 num_inputs = 1;
   }
   attrs.parsed = std::move(param);
   SetAttrDict();
-  auto ndoutputs = Invoke(op, &attrs, num_inputs, inputs,
+  auto ndoutputs = Invoke(op, &attrs, num_inputs, inputs.data(),
   &num_outputs, outputs);
   if (out) {
 *ret = PythonArg(4);
diff --git a/src/api/operator/random/np_normal_op.cc 
b/
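
The regressions fixed here were in the scalar/array parameter mixes and the default size=None path of these location-scale samplers; a sketch of the calls that should now work:

    from mxnet import np, npx
    npx.set_np()

    np.random.gumbel(0.0, 1.0)               # both parameters scalar, size=None
    np.random.logistic(np.zeros((2,)), 1.0)  # array loc, scalar scale
    np.random.logistic(0.0, np.ones((2,)))   # scalar loc, array scale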

[incubator-mxnet] branch master updated (df28e61 -> 5c525c9)

2020-04-28 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from df28e61  Fixed Install page history broken (#18182)
 add 5c525c9  [NumPy]Set numpy default dtype (#17283)

No new revisions were added by this update.

Summary of changes:
 benchmark/python/einsum/benchmark_einsum.py|   2 +-
 benchmark/python/ffi/benchmark_ffi.py  |   5 +-
 include/mxnet/c_api.h  |  14 ++
 include/mxnet/imperative.h |  26 ++-
 python/mxnet/__init__.py   |   1 +
 python/mxnet/gluon/data/dataloader.py  |   4 +-
 python/mxnet/ndarray/numpy/_op.py  | 147 +-
 python/mxnet/ndarray/numpy/random.py   |  33 +--
 python/mxnet/numpy/multiarray.py   | 159 +--
 python/mxnet/numpy/random.py   |   8 +-
 python/mxnet/numpy_extension/__init__.py   |   3 +-
 python/mxnet/symbol/numpy/_symbol.py   | 103 ++
 python/mxnet/symbol/numpy/random.py|  32 +--
 python/mxnet/symbol/numpy_extension/random.py  |   2 -
 python/mxnet/test_utils.py |   2 +-
 python/mxnet/util.py   | 217 +++-
 .../operator/numpy/np_broadcast_reduce_op_value.cc |   2 +-
 src/api/operator/numpy/np_init_op.cc   |  97 -
 src/api/operator/numpy/np_window_op.cc |   3 +-
 src/api/operator/random/np_gamma_op.cc |   2 +-
 src/api/operator/random/np_normal_op.cc|   2 +-
 src/api/operator/random/np_uniform_op.cc   |   2 +-
 src/c_api/c_api_ndarray.cc |  12 ++
 src/common/utils.h |  14 ++
 src/operator/numpy/linalg/np_gesvd.cc  |   1 +
 src/operator/numpy/np_broadcast_reduce_op.h|   1 +
 src/operator/numpy/np_broadcast_reduce_op_value.cc |   2 +-
 src/operator/numpy/np_init_op.cc   |  44 +++-
 src/operator/numpy/np_init_op.cu   |   6 +
 src/operator/numpy/np_init_op.h|   9 +-
 src/operator/numpy/np_true_divide-inl.h|  24 ++-
 src/operator/numpy/np_true_divide.cc   |   7 +-
 src/operator/numpy/np_window_op.cc |   6 +-
 src/operator/numpy/np_window_op.h  |   3 +-
 src/operator/numpy/random/np_bernoulli_op.h|   8 +-
 src/operator/numpy/random/np_gamma_op.cc   |   2 +-
 src/operator/numpy/random/np_gamma_op.h|   8 +-
 src/operator/numpy/random/np_laplace_op.h  |   2 +-
 src/operator/numpy/random/np_normal_op.h   |   8 +-
 src/operator/numpy/random/np_uniform_op.h  |   8 +-
 src/operator/random/sample_op.h|   3 +-
 src/operator/tensor/init_op.cc |   2 -
 src/operator/tensor/init_op.h  |  52 +++--
 tests/python/unittest/test_numpy_default_dtype.py  | 225 +
 tests/python/unittest/test_numpy_op.py |   9 +-
 45 files changed, 1045 insertions(+), 277 deletions(-)
 create mode 100644 tests/python/unittest/test_numpy_default_dtype.py



[incubator-mxnet] branch master updated (3a76ab6 -> 998c6ad)

2020-04-27 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3a76ab6  Enable docker cache build for images listed in 
docker-compose.yml (#18179)
 add 998c6ad  [numpy] Fix core dump for tril, triu  (#18157)

No new revisions were added by this update.

Summary of changes:
 include/mxnet/runtime/packed_func.h  | 1 +
 python/mxnet/numpy_dispatch_protocol.py  | 1 +
 tests/python/unittest/test_numpy_interoperability.py | 4 
 tests/python/unittest/test_numpy_op.py   | 1 -
 4 files changed, 6 insertions(+), 1 deletion(-)
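
The ops themselves are unchanged; the fix stops calls like these (including via the NumPy dispatch protocol) from crashing. Sketch:

    from mxnet import np, npx
    npx.set_np()

    m = np.arange(9).reshape(3, 3)
    np.tril(m)        # zero out everything above the main diagonal
    np.triu(m, k=1)   # keep only the strictly-upper part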



[incubator-mxnet] branch master updated (6972b98 -> 440a44a)

2020-04-24 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 6972b98  add bnrelu bf16 into amp list (#18155)
 add 440a44a  add: numpy op fill_diagonal (#18049)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/ndarray/numpy/_op.py  |  95 ++-
 python/mxnet/numpy/multiarray.py   |  90 ++-
 ...{np_nan_to_num_op.cc => np_fill_diagonal_op.cc} |  37 ++---
 src/operator/numpy/np_fill_diagonal_op-inl.h   | 175 +
 .../{np_triu_op.cc => np_fill_diagonal_op.cc}  |  43 +++--
 .../{np_interp_op.cu => np_fill_diagonal_op.cu}|  13 +-
 tests/python/unittest/test_numpy_op.py |  45 ++
 7 files changed, 442 insertions(+), 56 deletions(-)
 copy src/api/operator/numpy/{np_nan_to_num_op.cc => np_fill_diagonal_op.cc} 
(65%)
 create mode 100644 src/operator/numpy/np_fill_diagonal_op-inl.h
 copy src/operator/numpy/{np_triu_op.cc => np_fill_diagonal_op.cc} (53%)
 copy src/operator/numpy/{np_interp_op.cu => np_fill_diagonal_op.cu} (79%)
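
np.fill_diagonal mirrors official NumPy's in-place semantics; a sketch (wrap behaviour assumed to match NumPy):

    from mxnet import np, npx
    npx.set_np()

    a = np.zeros((3, 3))
    np.fill_diagonal(a, 5.0)   # mutates a: main diagonal becomes 5.0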



[incubator-mxnet] branch master updated (c3c76a8 -> 002d4f1)

2020-04-07 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from c3c76a8  Optimize AddTakeGrad Tensor Sum (#17906)
 add 002d4f1  * impl - FFi for linalg op (#17795)

No new revisions were added by this update.

Summary of changes:
 Makefile   |  4 +--
 benchmark/python/ffi/benchmark_ffi.py  |  8 +
 python/mxnet/ndarray/numpy/linalg.py   | 18 +-
 python/mxnet/symbol/numpy/linalg.py|  2 +-
 .../{np_nonzero_op.cc => linalg/np_eigvals.cc} | 30 
 .../numpy/{np_nonzero_op.cc => linalg/np_inv.cc}   | 14 
 .../numpy/{np_memory_op.cc => linalg/np_pinv.cc}   | 42 ++
 .../numpy/{np_nonzero_op.cc => linalg/np_potrf.cc} | 19 +-
 .../numpy/{np_memory_op.cc => linalg/np_solve.cc}  | 10 +++---
 .../{np_nonzero_op.cc => linalg/np_tensorinv.cc}   | 19 +-
 .../{np_memory_op.cc => linalg/np_tensorsolve.cc}  | 23 +---
 src/api/operator/ufunc_helper.cc   |  1 +
 src/api/operator/utils.cc  |  5 +++
 src/api/operator/utils.h   |  5 +--
 src/operator/numpy/linalg/np_eigvals-inl.h |  6 
 src/operator/numpy/linalg/np_pinv-inl.h| 14 
 src/operator/numpy/linalg/np_potrf.cc  |  3 +-
 src/operator/numpy/linalg/np_tensorinv-inl.h   |  6 
 src/operator/numpy/linalg/np_tensorsolve-inl.h |  6 
 src/operator/tensor/la_op.h|  6 
 tests/python/unittest/test_numpy_op.py |  1 -
 21 files changed, 178 insertions(+), 64 deletions(-)
 copy src/api/operator/numpy/{np_nonzero_op.cc => linalg/np_eigvals.cc} (59%)
 copy src/api/operator/numpy/{np_nonzero_op.cc => linalg/np_inv.cc} (87%)
 copy src/api/operator/numpy/{np_memory_op.cc => linalg/np_pinv.cc} (53%)
 copy src/api/operator/numpy/{np_nonzero_op.cc => linalg/np_potrf.cc} (76%)
 copy src/api/operator/numpy/{np_memory_op.cc => linalg/np_solve.cc} (89%)
 copy src/api/operator/numpy/{np_nonzero_op.cc => linalg/np_tensorinv.cc} (75%)
 copy src/api/operator/numpy/{np_memory_op.cc => linalg/np_tensorsolve.cc} (69%)
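
For orientation, two of the linalg routines gaining FFI entry points here, in the form they are called from Python:

    from mxnet import np, npx
    npx.set_np()

    a = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([9.0, 8.0])
    np.linalg.solve(a, b)   # x such that a @ x = b
    np.linalg.inv(a)        # matrix inverse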



[incubator-mxnet] branch master updated (ac567c8 -> 6ebe720)

2020-02-11 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from ac567c8  cmake: handle mshadow as INTERFACE target (#17396)
 add 6ebe720  [Scala/Java] Remove unnecessary data slicing (#17544)

No new revisions were added by this update.

Summary of changes:
 .../main/scala/org/apache/mxnet/module/DataParallelExecutorGroup.scala  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[incubator-mxnet] branch master updated (8d820cf -> 4054355)

2020-02-05 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 8d820cf  Fix abort() calls on CHECK macro failures (#17509)
 add 4054355  [tvmop] use size_var for placeholder & compute to reduce size 
of generated code (#17519)

No new revisions were added by this update.

Summary of changes:
 contrib/tvmop/basic/ufunc.py  | 16 
 contrib/tvmop/core/fromnumeric.py |  2 +-
 contrib/tvmop/core/multiarray.py  |  6 +++---
 contrib/tvmop/core/umath.py   | 10 +-
 contrib/tvmop/utils.py|  2 +-
 5 files changed, 18 insertions(+), 18 deletions(-)
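
For context, the placeholder/compute pattern being adopted, sketched against the TVM Python API of that period (the tvm.size_var spelling is an assumption; the symbol later moved under tvm.te):

    import tvm

    # size_var is known non-negative, so TVM can emit simpler shape
    # handling than a plain var would allow.
    n = tvm.size_var('n')
    A = tvm.placeholder((n,), name='A', dtype='float32')
    B = tvm.compute((n,), lambda i: A[i] + 1.0, name='B')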



[incubator-mxnet] branch master updated (b3bbbbe -> eceb5f2)

2020-02-03 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b3bbbbe  Docs: Python tutorials doc fixes (#17435)
 add eceb5f2  [submodule] Update tvm to the latest (#17438)

No new revisions were added by this update.

Summary of changes:
 3rdparty/tvm   |  2 +-
 .../pages/api/cpp/docs/tutorials/subgraphAPI.md|  4 +-
 docs/static_site/src/pages/api/faq/new_op.md   |  4 +-
 include/mxnet/imperative.h | 10 +--
 include/mxnet/op_attr_types.h  |  2 +-
 src/c_api/c_api.cc |  2 +-
 src/c_api/c_api_function.cc|  4 +-
 src/c_api/c_api_symbolic.cc| 10 +--
 src/common/exec_utils.cc   | 10 +--
 src/executor/eliminate_common_expr_pass.cc | 28 
 src/executor/exec_pass.h   |  4 +-
 src/executor/graph_executor.cc | 16 ++---
 src/executor/infer_graph_attr_pass.cc  |  8 +--
 src/executor/pointwise_fusion_pass.cc  | 18 ++---
 src/executor/simple_partition_pass.h   | 14 ++--
 src/imperative/cached_op.cc|  4 +-
 src/imperative/cached_op.h |  6 +-
 src/imperative/imperative.cc   | 14 ++--
 src/nnvm/amp_infer_unknown.cc  | 12 ++--
 src/nnvm/gradient.cc   | 20 +++---
 src/nnvm/graph_editor.cc   |  6 +-
 src/nnvm/legacy_json_util.cc   |  2 +-
 src/nnvm/legacy_op_util.cc |  6 +-
 src/nnvm/low_precision_pass.cc | 42 ++--
 src/nnvm/node_op_util.h|  4 +-
 src/nnvm/plan_memory.cc|  2 -
 src/nnvm/tvm_bridge.cc |  4 +-
 src/operator/batch_norm_v1.cc  |  2 +-
 src/operator/contrib/amp_graph_pass.cc |  4 +-
 src/operator/contrib/roi_align.cc  |  2 +-
 src/operator/contrib/sync_batch_norm.cc|  2 +-
 src/operator/control_flow.cc   |  6 +-
 src/operator/custom/custom.cc  |  6 +-
 src/operator/elemwise_op_common.h  | 10 +--
 src/operator/fusion/fused_op.cc| 16 ++---
 src/operator/fusion/fused_op.h | 12 ++--
 src/operator/identity_attach_KL_sparse_reg.cc  |  2 +-
 src/operator/leaky_relu.cc |  2 +-
 src/operator/nn/activation.cc  |  2 +-
 src/operator/nn/batch_norm.cc  |  8 +--
 src/operator/nn/concat.cc  |  2 +-
 src/operator/nn/convolution.cc |  2 +-
 src/operator/nn/cudnn/cudnn_batch_norm.cc  |  2 +-
 src/operator/nn/deconvolution.cc   |  2 +-
 src/operator/nn/dropout.cc |  2 +-
 src/operator/nn/fully_connected.cc |  4 +-
 src/operator/nn/group_norm.cc  |  2 +-
 src/operator/nn/layer_norm.cc  |  2 +-
 src/operator/nn/lrn.cc |  2 +-
 src/operator/nn/softmax-inl.h  |  2 +-
 src/operator/nn/upsampling.cc  |  4 +-
 src/operator/numpy/np_broadcast_reduce_op_value.cc |  2 +-
 .../numpy/np_elemwise_broadcast_logic_op.cc|  6 +-
 src/operator/numpy/np_matrix_op.cc | 10 +--
 src/operator/numpy/np_where_op.cc  |  2 +-
 src/operator/operator_common.h | 16 ++---
 src/operator/quantization/quantize_graph_pass.cc   | 76 +++---
 src/operator/quantization/quantized_activation.cc  |  2 +-
 src/operator/quantization/quantized_batch_norm.cc  |  2 +-
 src/operator/quantization/quantized_concat.cc  |  2 +-
 src/operator/quantization/quantized_conv.cc|  2 +-
 .../quantization/quantized_elemwise_add.cc |  2 +-
 .../quantization/quantized_elemwise_mul.cc |  2 +-
 src/operator/quantization/quantized_flatten.cc |  2 +-
 .../quantization/quantized_fully_connected.cc  |  2 +-
 src/operator/quantization/quantized_indexing_op.cc |  2 +-
 src/operator/quantization/quantized_pooling.cc |  2 +-
 src/operator/random/sample_multinomial_op.cc   |  2 +-
 src/operator/random/sample_op.h| 62 +-
 src/operator/regression_output-inl.h   |  2 +-
 src/operator/rnn.cc|  2 +-
 src/operator/softmax_output.cc |  4 +-
 src/operator/subgraph/build_subgraph.cc|  9 +--
 src/operator/subgraph/common.h |  2 +-
 src/operator/subgraph/default_subgraph_property.cc |  4 +-
 .../subgraph


[incubator-mxnet] 02/02: upgrade enum according to updated tvm

2020-01-27 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch tvm_sync
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 2ef7de0ec0072828e788976d1ec44e9438b96383
Author: Yizhi Liu 
AuthorDate: Fri Jan 24 22:17:50 2020 -0800

upgrade enum according to updated tvm
---
 src/nnvm/plan_memory.cc  | 2 --
 src/nnvm/tvm_bridge.cc   | 4 ++--
 src/operator/numpy/np_elemwise_broadcast_logic_op.cc | 6 +++---
 src/operator/tensor/elemwise_unary_op_pow.cc | 4 ++--
 src/operator/tvmop/op_module.cc  | 2 +-
 5 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/src/nnvm/plan_memory.cc b/src/nnvm/plan_memory.cc
index c89eefc..e061dab 100644
--- a/src/nnvm/plan_memory.cc
+++ b/src/nnvm/plan_memory.cc
@@ -26,7 +26,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include "graph_algorithm.h"
@@ -36,7 +35,6 @@ namespace nnvm {
 namespace pass {
 
 namespace {
-  using namespace nnvm::top;
 // Return bytes of data flag.
 static int MXGetDTypeSize(int type_flag) {
   switch (type_flag) {
diff --git a/src/nnvm/tvm_bridge.cc b/src/nnvm/tvm_bridge.cc
index 0692998..17e05e3 100644
--- a/src/nnvm/tvm_bridge.cc
+++ b/src/nnvm/tvm_bridge.cc
@@ -73,7 +73,7 @@ class TVMFunctor {
 const NDArray& nd =
static_cast<NDArray*>(args.values[i].v_handle)[0];
 // We cannot set the value until
-type_codes_[i] = kArrayHandle;
+type_codes_[i] = kTVMDLTensorHandle;
 array_data_.push_back(nd);
 array_loc_.push_back(i);
 // check if there is read or mutate
@@ -86,7 +86,7 @@ class TVMFunctor {
   mutate_vars->push_back(nd.var());
 }
   } else {
-CHECK_LT(args.type_codes[i], kTVMType)
+CHECK_LT(args.type_codes[i], kTVMDataType)
 << "Only allow POD type in mxnet async call";
   }
 }
diff --git a/src/operator/numpy/np_elemwise_broadcast_logic_op.cc 
b/src/operator/numpy/np_elemwise_broadcast_logic_op.cc
index 7e8951a..8395caf 100644
--- a/src/operator/numpy/np_elemwise_broadcast_logic_op.cc
+++ b/src/operator/numpy/np_elemwise_broadcast_logic_op.cc
@@ -95,7 +95,7 @@ struct TVMBinaryBroadcastCompute {
 values.resize(num_args);
 for (size_t i = 0; i < num_args; ++i) {
   tblobs[i] = PrependAxes(tblobs[i], ondim);
-  type_codes[i] = kArrayHandle;
+  type_codes[i] = kTVMDLTensorHandle;
   values[i].v_handle = const_cast<DLTensor*>(&(tblobs[i].dltensor()));
 }
 tvm::runtime::TVMArgs tvm_args(&values[0], &type_codes[0], tblobs.size());
@@ -200,7 +200,7 @@ struct TVMBinaryBroadcastScalarCompute {
 values.resize(num_args);
 
 // input tensor setup
-type_codes[0] = kArrayHandle;
+type_codes[0] = kTVMDLTensorHandle;
 values[0].v_handle = const_cast<DLTensor*>(&(tblobs[0].dltensor()));
 
 // scalar param
@@ -208,7 +208,7 @@ struct TVMBinaryBroadcastScalarCompute {
 values[1].v_float64 = nnvm::get<double>(attrs.parsed);
 
 // output tensor
-type_codes[2] = kArrayHandle;
+type_codes[2] = kTVMDLTensorHandle;
 values[2].v_handle = const_cast<DLTensor*>(&(tblobs[1].dltensor()));
 
 tvm::runtime::TVMArgs tvm_args(&values[0], &type_codes[0], 3);
diff --git a/src/operator/tensor/elemwise_unary_op_pow.cc 
b/src/operator/tensor/elemwise_unary_op_pow.cc
index b4d3a4a..914cb820 100644
--- a/src/operator/tensor/elemwise_unary_op_pow.cc
+++ b/src/operator/tensor/elemwise_unary_op_pow.cc
@@ -224,7 +224,7 @@ The storage type of ``rsqrt`` output is always dense
 MXNET_OPERATOR_REGISTER_BINARY_WITH_SPARSE_CPU_DR(
   _backward_rsqrt, unary_bwd)
 .set_attr("FGradient",
-  [](const nnvm::NodePtr& n, const std::vector& ograds) {
+  [](const nnvm::ObjectPtr& n, const std::vector& ograds) {
   // NodeEntry{n} : y_grad * f'(x)
   // n->inputs[0] : y_grad
   // n->inputs[1] : x
@@ -329,7 +329,7 @@ MXNET_OPERATOR_REGISTER_BINARY(_backward_rcbrt)
 ElemwiseBinaryOp::Compute>)
 .set_attr("FGradient",
-  [](const nnvm::NodePtr& n, const std::vector& ograds) {
+  [](const nnvm::ObjectPtr& n, const std::vector& ograds) {
   // NodeEntry{n} : y_grad * f'(x)
   // n->inputs[0] : y_grad
   // n->inputs[1] : x
diff --git a/src/operator/tvmop/op_module.cc b/src/operator/tvmop/op_module.cc
index b45df5d..cdd7321 100644
--- a/src/operator/tvmop/op_module.cc
+++ b/src/operator/tvmop/op_module.cc
@@ -94,7 +94,7 @@ void TVMOpModule::Call(const std::string &func_name,
   type_codes.resize(args.size());
   values.resize(args.size());
   for (size_t i = 0; i < args.size(); ++i) {
-type_codes[i] = kArrayHandle;
+type_codes[i] = kTVMDLTensorHandle;
values[i].v_handle = const_cast<DLTensor*>(&(args[i].dltensor()));
   }
 



[incubator-mxnet] branch tvm_sync created (now 2ef7de0)

2020-01-27 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch tvm_sync
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at 2ef7de0  upgrade enum according to updated tvm

This branch includes the following new commits:

 new dde46f5  sync latest tvm
 new 2ef7de0  upgrade enum according to updated tvm

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-mxnet] branch master updated (bd7eedf -> 0f04b0d)

2020-01-07 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from bd7eedf  Fix #17164 symbolblock with BatchNorm inside during cast to 
fp16 (#17212)
 add 0f04b0d  [tvmop] support cuda multi-arch compilation (#17214)

No new revisions were added by this update.

Summary of changes:
 3rdparty/tvm |  2 +-
 CMakeLists.txt   | 18 --
 cmake/BuildTVM.cmake |  3 +++
 contrib/tvmop/compile.py | 24 
 4 files changed, 24 insertions(+), 23 deletions(-)



[incubator-mxnet] branch master updated (33602e5 -> b972406)

2019-11-14 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 33602e5  Update TVM submodule (#16777)
 add b972406  clean TVM (#16814)

No new revisions were added by this update.

Summary of changes:
 Makefile | 1 +
 1 file changed, 1 insertion(+)



[incubator-mxnet] branch master updated (7e21bda -> 33602e5)

2019-11-14 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 7e21bda  Fix nightly build (#16773)
 add 33602e5  Update TVM submodule (#16777)

No new revisions were added by this update.

Summary of changes:
 .gitmodules  | 2 +-
 3rdparty/tvm | 2 +-
 Makefile | 2 +-
 cmake/BuildTVM.cmake | 3 +++
 4 files changed, 6 insertions(+), 3 deletions(-)



[incubator-mxnet] branch master updated (6997691 -> 36bab1c)

2019-09-01 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 6997691  [Dev] update ps-lite dependency (#15936)
 add 36bab1c  Fix flaky clojure profile test (#16058)

No new revisions were added by this update.

Summary of changes:
 .../examples/profiler/test/core_test.clj   |   3 +-
 .../profiler/test/profile-matmul-20iter.json.ref   | 271 -
 contrib/clojure-package/integration-tests.sh   |   2 +-
 3 files changed, 2 insertions(+), 274 deletions(-)
 delete mode 100644 
contrib/clojure-package/examples/profiler/test/profile-matmul-20iter.json.ref



[incubator-mxnet] branch enable-tvm-op deleted (was 74b4799)

2019-08-16 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch enable-tvm-op
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


 was 74b4799  enable tvm_op for ci

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[incubator-mxnet] 01/01: enable tvm_op for ci

2019-08-16 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch enable-tvm-op
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 74b47994014fc0349ccdcebcc4967d9e3a95f5e3
Author: Yizhi Liu 
AuthorDate: Wed Aug 14 16:24:20 2019 +0800

enable tvm_op for ci
---
 ci/docker/install/ubuntu_python.sh   |  2 +-
 ci/docker/runtime_functions.sh   | 12 
 tests/python/unittest/test_tvm_op.py | 15 +++
 3 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/ci/docker/install/ubuntu_python.sh 
b/ci/docker/install/ubuntu_python.sh
index 2ca0cce..b8626d3 100755
--- a/ci/docker/install/ubuntu_python.sh
+++ b/ci/docker/install/ubuntu_python.sh
@@ -31,4 +31,4 @@ python3 get-pip.py
 python2 get-pip.py
 
 pip2 install nose cpplint==1.3.0 'numpy>1.16.0,<2.0.0' nose-timer 
'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1 boto3 Cython==0.29.7
-pip3 install nose cpplint==1.3.0 pylint==2.3.1 'numpy>1.16.0,<2.0.0' 
nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1 boto3 
Cython==0.29.7
+pip3 install nose cpplint==1.3.0 pylint==2.3.1 'numpy>1.16.0,<2.0.0' 
nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1 boto3 
Cython==0.29.7 decorator
diff --git a/ci/docker/runtime_functions.sh b/ci/docker/runtime_functions.sh
index 2518f4c..58fb350 100755
--- a/ci/docker/runtime_functions.sh
+++ b/ci/docker/runtime_functions.sh
@@ -374,6 +374,7 @@ build_ubuntu_cpu_openblas() {
 build_ccache_wrappers
 make \
 DEV=1 \
+USE_TVM_OP=1  \
 ENABLE_TESTCOVERAGE=1 \
 USE_CPP_PACKAGE=1 \
 USE_BLAS=openblas \
@@ -395,6 +396,7 @@ build_ubuntu_cpu_mkl() {
 ENABLE_TESTCOVERAGE=1 \
 USE_CPP_PACKAGE=1 \
 USE_BLAS=mkl  \
+USE_TVM_OP=1  \
 USE_MKLDNN=0  \
 USE_INTEL_PATH=/opt/intel \
 USE_DIST_KVSTORE=1\
@@ -412,6 +414,7 @@ build_ubuntu_cpu_cmake_debug() {
 -DCMAKE_C_COMPILER_LAUNCHER=ccache \
 -DENABLE_TESTCOVERAGE=ON \
 -DUSE_CUDA=OFF \
+-DUSE_TVM_OP=ON \
 -DUSE_MKL_IF_AVAILABLE=OFF \
 -DUSE_OPENMP=OFF \
 -DUSE_OPENCV=ON \
@@ -559,6 +562,7 @@ build_ubuntu_cpu_mkldnn() {
 DEV=1 \
 ENABLE_TESTCOVERAGE=1 \
 USE_CPP_PACKAGE=1 \
+USE_TVM_OP=1  \
 USE_BLAS=openblas \
 USE_SIGNAL_HANDLER=1  \
 -j$(nproc)
@@ -573,6 +577,7 @@ build_ubuntu_cpu_mkldnn_mkl() {
 DEV=1 \
 ENABLE_TESTCOVERAGE=1 \
 USE_CPP_PACKAGE=1 \
+USE_TVM_OP=1  \
 USE_BLAS=mkl  \
 USE_SIGNAL_HANDLER=1  \
 -j$(nproc)
@@ -657,6 +662,7 @@ build_ubuntu_gpu_mkldnn() {
 USE_CUDA=1\
 USE_CUDA_PATH=/usr/local/cuda \
 USE_CUDNN=1   \
+USE_TVM_OP=1  \
 CUDA_ARCH="$CI_CUDA_COMPUTE_CAPABILITIES" \
 USE_SIGNAL_HANDLER=1  \
 -j$(nproc)
@@ -674,6 +680,7 @@ build_ubuntu_gpu_mkldnn_nocudnn() {
 USE_CUDA=1\
 USE_CUDA_PATH=/usr/local/cuda \
 USE_CUDNN=0   \
+USE_TVM_OP=1  \
 CUDA_ARCH="$CI_CUDA_COMPUTE_CAPABILITIES" \
 USE_SIGNAL_HANDLER=1  \
 -j$(nproc)
@@ -690,6 +697,7 @@ build_ubuntu_gpu_cuda101_cudnn7() {
 USE_CUDA=1\
 USE_CUDA_PATH=/usr/local/cuda \
 USE_CUDNN=1   \
+USE_TVM_OP=1  \
 USE_CPP_PACKAGE=1 \
 USE_DIST_KVSTORE=1\
 CUDA_ARCH="$CI_CUDA_COMPUTE_CAPABILITIES" \
@@ -733,6 +741,7 @@ build_ubuntu_gpu_cmake_mkldnn() {
 -DENABLE_TESTCOVERAGE=ON\
 -DUSE_CUDA=1\
 -DUSE_CUDNN=1   \
+-DUSE_TVM_OP=1  \
 -DUSE_MKLML_MKL=1   \
 -DCMAKE_BUILD_TYPE=Release  \
 -DCUDA_ARCH_NAME=Manual \
@@ -758,6 +767,7 @@ build_ubuntu_gpu_cmake() {
 -DENABLE_TESTCOVERAGE=ON\
 -DUSE_CUDA=ON   \
 -DUSE_CUDNN=ON  \
+-DUSE_TVM_OP=ON \
 -DUSE_MKL_IF_AVAILABLE=OFF  \
 -

[incubator-mxnet] branch enable-tvm-op created (now 74b4799)

2019-08-16 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch enable-tvm-op
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at 74b4799  enable tvm_op for ci

This branch includes the following new commits:

 new 74b4799  enable tvm_op for ci

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-mxnet] branch master updated (05f3ae1 -> 67daae7)

2019-08-13 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 05f3ae1  Large Index Support for Slice (#15593)
 add 67daae7  tvm infra for op attrs (#15854)

No new revisions were added by this update.

Summary of changes:
 contrib/tvmop/compile.py |  4 ++--
 contrib/tvmop/opdef.py   | 12 
 2 files changed, 10 insertions(+), 6 deletions(-)



[incubator-mxnet] branch master updated: fix tvm cmake (#15781)

2019-08-07 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 07eb482  fix tvm cmake (#15781)
07eb482 is described below

commit 07eb482670c5e7891b6baa0184f361a9b9621786
Author: Haozheng Fan 
AuthorDate: Thu Aug 8 07:56:26 2019 +0800

fix tvm cmake (#15781)
---
 CMakeLists.txt  | 2 +-
 cmake/BuildTVM.cmake| 2 +-
 src/operator/contrib/tvmop/ufunc.cc | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 7c479f7..b33d195 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -751,7 +751,7 @@ if(USE_TVM_OP)
   add_custom_command(TARGET mxnet POST_BUILD
 COMMAND ${CMAKE_COMMAND} -E env
   
PYTHONPATH="${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/tvm/python:${CMAKE_CURRENT_SOURCE_DIR}/3rdparty/tvm/topi/python:${CMAKE_CURRENT_SOURCE_DIR}/contrib"
-  LD_LIBRARY_PATH="${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/build"
+  LD_LIBRARY_PATH="${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm"
   ${Python3_EXECUTABLE} 
${CMAKE_CURRENT_SOURCE_DIR}/contrib/tvmop/compile.py 
-o${CMAKE_CURRENT_BINARY_DIR}/libtvmop.so
 )
 endif()
diff --git a/cmake/BuildTVM.cmake b/cmake/BuildTVM.cmake
index ad8517c..db8b33b 100644
--- a/cmake/BuildTVM.cmake
+++ b/cmake/BuildTVM.cmake
@@ -16,7 +16,7 @@
 # under the License.
 
 message(STATUS "Prepare external packages for TVM...")
-execute_process(COMMAND 
"${CMAKE_CURRENT_SOURCE_DIR}/contrib/tvmop/prepare_tvm.sh")
+execute_process(COMMAND "sh" 
"${CMAKE_CURRENT_SOURCE_DIR}/contrib/tvmop/prepare_tvm.sh")
 
 # Whether enable ROCM runtime
 #
diff --git a/src/operator/contrib/tvmop/ufunc.cc 
b/src/operator/contrib/tvmop/ufunc.cc
index faba671..3475a21 100644
--- a/src/operator/contrib/tvmop/ufunc.cc
+++ b/src/operator/contrib/tvmop/ufunc.cc
@@ -56,10 +56,10 @@ NNVM_REGISTER_OP(_contrib_tvm_vadd)
 .add_argument("b", "NDArray-or-Symbol", "second input")
 .set_attr("FInferShape", BinaryBroadcastShape)
 .set_attr("FInferType", mxnet::op::ElemwiseType<2, 1>)
-.set_attr("FCompute", 
mxnet::op::TVMBroadcastCompute)
 #if MXNET_USE_CUDA
-.set_attr("FCompute", 
mxnet::op::TVMBroadcastCompute);
+.set_attr("FCompute", 
mxnet::op::TVMBroadcastCompute)
 #endif  // MXNET_USE_CUDA
+.set_attr("FCompute", 
mxnet::op::TVMBroadcastCompute);
 
 }  // namespace op
 }  // namespace mxnet



[incubator-mxnet] branch master updated (e0ff3c3 -> cf6e8cb)

2018-12-05 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from e0ff3c3  Updated docs for randint operator (#13541)
 add cf6e8cb  Chi_square_check for discrete distribution fix (#13543)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/test_utils.py | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)



[incubator-mxnet] branch java-api updated (1c54aaa -> bb7bbaf)

2018-11-15 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch java-api
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 1c54aaa  Merge branch 'master' into java-api
 add bb7bbaf  [MXNET-1182] Predictor example (#13237)

No new revisions were added by this update.

Summary of changes:
 .../run_predictor_java_example.sh} |   9 +-
 .../javaapi/infer/predictor/PredictorExample.java  | 200 +
 .../javaapi/infer/predictor/README.md  |  61 +++
 .../org/apache/mxnet/infer/javaapi/Predictor.scala |  13 ++
 4 files changed, 277 insertions(+), 6 deletions(-)
 copy scala-package/examples/scripts/infer/{objectdetector/run_ssd_example.sh 
=> predictor/run_predictor_java_example.sh} (88%)
 create mode 100644 
scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/predictor/PredictorExample.java
 create mode 100644 
scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/predictor/README.md



[incubator-mxnet] branch java-api updated (f52b9aa -> 1c54aaa)

2018-11-15 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch java-api
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from f52b9aa  [MXNET-1187] Added Java SSD Inference Tutorial for website 
(#13201)
 add e655f62  [Example] Fixing Gradcam implementation (#13196)
 add 7dfcc94  Fix test failure due to hybridize call in 
test_gluon_rnn.test_layer_fill_shape (#13043)
 add f79bb18  Addressed sphinx build issue (#13246)
 add f5ba267  Add gauss err function operator (#13229)
 add b8e36e0  Add Turing and Volta support to arch_name (#13168)
 add 2eb76b5  Bugfix in ci/docker_cache.py (#13249)
 add 8cb73ef  Fix scaladoc build errors (#13189)
 add ead3af2  Add missing documentations for getnnz (#13128)
 add 100a4aa  Addressed ONNX module documentation warnings and added notes 
for short-form representation (#13259)
 add 7541021  Manually track num_max_thread (#12380)
 add cf991ff  adding unit test for MKLDNN FullyConnected operator (#12985)
 add 339e085  Doc fixes (#13256)
 add 1ef83c9  fix train mnist for inception-bn and resnet (#13239)
 add e7f9770  Fix a bug in index_copy (#13218)
 add 0259254  Addressed doc issues (#13165)
 add 226f9cb  Force APT cache update before executing install (#13285)
 add 8ac7fb9  [Example] Gradcam consolidation in tutorial (#13255)
 add 97fdfd9  [MXNET-1203] Tutorial infogan  (#13144)
 add c78f89f  Remove obsolete memory cost example (#13235)
 add 1c54aaa  Merge branch 'master' into java-api

No new revisions were added by this update.

Summary of changes:
 CONTRIBUTORS.md|   1 +
 ci/docker/install/ubuntu_caffe.sh  |   1 +
 ci/docker/install/ubuntu_clang.sh  |   2 +
 ci/docker/install/ubuntu_core.sh   |   2 +-
 ci/docker/install/ubuntu_docs.sh   |   1 +
 ci/docker/install/ubuntu_emscripten.sh |   1 +
 ci/docker/install/ubuntu_gcc8.sh   |   2 +-
 ci/docker/install/ubuntu_llvm.sh   |   3 +-
 ci/docker/install/ubuntu_nightly_tests.sh  |   2 +-
 ci/docker/install/ubuntu_npm_blc.sh|   2 +-
 ci/docker/install/ubuntu_nvidia.sh |   1 +
 ci/docker/install/ubuntu_onnx.sh   |   1 +
 ci/docker/install/ubuntu_perl.sh   |   1 +
 ci/docker/install/ubuntu_python.sh |   1 +
 ci/docker/install/ubuntu_r.sh  |   2 +-
 ci/docker/install/ubuntu_rat.sh|   2 +-
 ci/docker/install/ubuntu_scala.sh  |   6 +-
 ci/docker/install/ubuntu_tutorials.sh  |   1 +
 ci/docker_cache.py |   2 +-
 cmake/FirstClassLangCuda.cmake |   6 +
 docs/_static/js/auto_module_index.js   |  16 +-
 docs/api/python/ndarray/ndarray.md |   8 +-
 docs/api/python/ndarray/random.md  |   1 +
 docs/api/python/symbol/symbol.md   |   1 +
 docs/architecture/note_memory.md   |  15 +-
 docs/conf.py   |   2 +-
 docs/mxdoc.py  |   9 +-
 .../vision}/cnn_visualization/gradcam.py   |   4 +-
 docs/tutorials/gluon/info_gan.md   | 437 +
 docs/tutorials/index.md|   1 +
 docs/tutorials/vision/cnn_visualization.md |   6 +-
 example/cnn_visualization/README.md|  17 -
 example/cnn_visualization/gradcam_demo.py  | 110 --
 example/cnn_visualization/vgg.py   |  84 
 example/image-classification/train_mnist.py|   1 +
 example/memcost/Makefile   |  38 --
 example/memcost/README.md  |  30 --
 example/memcost/inception_memcost.py   | 107 -
 python/mxnet/contrib/onnx/mx2onnx/export_model.py  |   5 +
 python/mxnet/contrib/onnx/onnx2mx/import_model.py  |   9 +
 .../mxnet/contrib/onnx/onnx2mx/import_to_gluon.py  |   5 +
 python/mxnet/gluon/nn/basic_layers.py  |   9 +-
 python/mxnet/ndarray/ndarray.py|   2 +-
 scala-package/core/pom.xml |   4 -
 .../src/main/scala/org/apache/mxnet/Context.scala  |   2 +
 .../src/main/scala/org/apache/mxnet/Executor.scala |   5 -
 .../core/src/main/scala/org/apache/mxnet/IO.scala  |   7 +-
 .../src/main/scala/org/apache/mxnet/KVStore.scala  |   2 +-
 .../src/main/scala/org/apache/mxnet/NDArray.scala  |   1 +
 .../main/scala/org/apache/mxnet/Optimizer.scala|   2 +-
 .../scala/org/apache/mxnet/ResourceScope.scala |   6 +-
 .../src/main/scala/org/apache/mxnet/Symbol.scala   |   1 +
 .../scala/org/apache/mxnet/Visualization.scala |   1 +
 .../scala/org/apache/mxnet/io/MXDataIter.scala |   4 +-
 .../scala/org/apache/mxnet/io

[incubator-mxnet] branch java-api updated (3ec9030 -> f52b9aa)

2018-11-15 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch java-api
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3ec9030  add defaults and clean up the tests (#13295)
 add f52b9aa  [MXNET-1187] Added Java SSD Inference Tutorial for website 
(#13201)

No new revisions were added by this update.

Summary of changes:
 docs/tutorials/index.md  |   1 +
 docs/tutorials/java/ssd_inference.md | 186 +++
 tests/tutorials/test_sanity_tutorials.py |   3 +-
 3 files changed, 189 insertions(+), 1 deletion(-)
 create mode 100644 docs/tutorials/java/ssd_inference.md



[incubator-mxnet] branch java-api updated (7d51241 -> 3ec9030)

2018-11-15 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch java-api
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 7d51241  [MXNET-1181] Added command line alternative to IntelliJ in 
install instructions (#13267)
 add 3ec9030  add defaults and clean up the tests (#13295)

No new revisions were added by this update.

Summary of changes:
 Makefile   |  2 +-
 scala-package/core/pom.xml | 10 --
 scala-package/examples/pom.xml | 10 --
 scala-package/infer/pom.xml| 10 --
 4 files changed, 13 insertions(+), 19 deletions(-)



[incubator-mxnet] branch java-api updated (52bead0 -> 7d51241)

2018-11-15 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch java-api
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 52bead0  clean up the NDArray follow the comments (#13281)
 add 7d51241  [MXNET-1181] Added command line alternative to IntelliJ in 
install instructions (#13267)

No new revisions were added by this update.

Summary of changes:
 docs/tutorials/java/mxnet_java_on_intellij.md  |  15 ++-
 .../scala/mxnet_java_install_and_run_examples.md   | 123 -
 2 files changed, 14 insertions(+), 124 deletions(-)
 delete mode 100644 docs/tutorials/scala/mxnet_java_install_and_run_examples.md



[incubator-mxnet] branch java-api updated (218a7a9 -> 52bead0)

2018-11-15 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch java-api
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 218a7a9  Addressing PR feedback for merging Java API into master 
(#13277)
 add 52bead0  clean up the NDArray follow the comments (#13281)

No new revisions were added by this update.

Summary of changes:
 .../scala/org/apache/mxnet/javaapi/NDArray.scala   | 138 +++--
 1 file changed, 74 insertions(+), 64 deletions(-)



[incubator-mxnet] branch java-api updated (1bb5b7f -> fb4cad9)

2018-11-13 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch java-api
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 1bb5b7f  [MXNET-1041] Add Java benchmark (#13095)
 add fb4cad9  [MXNET-918] [Introduce Random module / Refact code generation 
(#13038)][Cherry pick]  (#13242)

No new revisions were added by this update.

Summary of changes:
 .../benchmark/ObjectDetectionBenchmark.java|   2 +-
 .../scala/org/apache/mxnet/APIDocGenerator.scala   | 315 +
 .../scala/org/apache/mxnet/GeneratorBase.scala | 163 +++
 .../main/scala/org/apache/mxnet/NDArrayMacro.scala | 263 ++---
 .../main/scala/org/apache/mxnet/SymbolMacro.scala  | 250 +---
 .../apache/mxnet/javaapi/JavaNDArrayMacro.scala|  95 +--
 6 files changed, 454 insertions(+), 634 deletions(-)
 create mode 100644 
scala-package/macros/src/main/scala/org/apache/mxnet/GeneratorBase.scala



[incubator-mxnet] branch master updated: [MXNET-918] Introduce Random module / Refact code generation (#13038)

2018-11-05 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 53c5a72  [MXNET-918] Introduce Random module / Refact code generation 
(#13038)
53c5a72 is described below

commit 53c5a72c1f28dad284b7f6d7699cca6f0eec776a
Author: mathieu 
AuthorDate: Mon Nov 5 18:55:45 2018 +0100

[MXNET-918] Introduce Random module / Refact code generation (#13038)

* refactor code gen

* remove xxxAPIMacroBase (overkill)

* CI errors / scala-style

* PR review comments
---
 .../scala/org/apache/mxnet/APIDocGenerator.scala   | 234 --
 .../scala/org/apache/mxnet/GeneratorBase.scala | 157 
 .../main/scala/org/apache/mxnet/NDArrayMacro.scala | 263 +++--
 .../main/scala/org/apache/mxnet/SymbolMacro.scala  | 250 ++--
 4 files changed, 411 insertions(+), 493 deletions(-)

diff --git 
a/scala-package/macros/src/main/scala/org/apache/mxnet/APIDocGenerator.scala 
b/scala-package/macros/src/main/scala/org/apache/mxnet/APIDocGenerator.scala
index b4efa65..bfa378e 100644
--- a/scala-package/macros/src/main/scala/org/apache/mxnet/APIDocGenerator.scala
+++ b/scala-package/macros/src/main/scala/org/apache/mxnet/APIDocGenerator.scala
@@ -17,178 +17,154 @@
 
 package org.apache.mxnet
 
-import org.apache.mxnet.init.Base._
-import org.apache.mxnet.utils.CToScalaUtils
 import java.io._
 import java.security.MessageDigest
 
-import scala.collection.mutable.{ArrayBuffer, ListBuffer}
+import scala.collection.mutable.ListBuffer
 
 /**
   * This object will generate the Scala documentation of the new Scala API
   * Two file namely: SymbolAPIBase.scala and NDArrayAPIBase.scala
   * The code will be executed during Macros stage and file live in Core stage
   */
-private[mxnet] object APIDocGenerator{
-  case class absClassArg(argName : String, argType : String, argDesc : String, 
isOptional : Boolean)
-  case class absClassFunction(name : String, desc : String,
-   listOfArgs: List[absClassArg], returnType : String)
+private[mxnet] object APIDocGenerator extends GeneratorBase {
 
-
-  def main(args: Array[String]) : Unit = {
+  def main(args: Array[String]): Unit = {
 val FILE_PATH = args(0)
 val hashCollector = ListBuffer[String]()
-hashCollector += absClassGen(FILE_PATH, true)
-hashCollector += absClassGen(FILE_PATH, false)
+hashCollector += typeSafeClassGen(FILE_PATH, true)
+hashCollector += typeSafeClassGen(FILE_PATH, false)
 hashCollector += nonTypeSafeClassGen(FILE_PATH, true)
 hashCollector += nonTypeSafeClassGen(FILE_PATH, false)
 val finalHash = hashCollector.mkString("\n")
   }
 
-  def MD5Generator(input : String) : String = {
+  def MD5Generator(input: String): String = {
 val md = MessageDigest.getInstance("MD5")
 md.update(input.getBytes("UTF-8"))
 val digest = md.digest()
 org.apache.commons.codec.binary.Base64.encodeBase64URLSafeString(digest)
   }
 
-  def absClassGen(FILE_PATH : String, isSymbol : Boolean) : String = {
-// scalastyle:off
-val absClassFunctions = getSymbolNDArrayMethods(isSymbol)
-// Defines Operators that should not generated
-val notGenerated = Set("Custom")
-// TODO: Add Filter to the same location in case of refactor
-val absFuncs = absClassFunctions.filterNot(_.name.startsWith("_"))
-  .filterNot(ele => notGenerated.contains(ele.name))
-  .map(absClassFunction => {
-  val scalaDoc = generateAPIDocFromBackend(absClassFunction)
-  val defBody = generateAPISignature(absClassFunction, isSymbol)
-  s"$scalaDoc\n$defBody"
-})
-val packageName = if (isSymbol) "SymbolAPIBase" else "NDArrayAPIBase"
-val apacheLicence = "/*\n* Licensed to the Apache Software Foundation 
(ASF) under one or more\n* contributor license agreements.  See the NOTICE file 
distributed with\n* this work for additional information regarding copyright 
ownership.\n* The ASF licenses this file to You under the Apache License, 
Version 2.0\n* (the \"License\"); you may not use this file except in 
compliance with\n* the License.  You may obtain a copy of the License at\n*\n*  
  http://www.apache.org/licenses/LICE [...]
-val scalaStyle = "// scalastyle:off"
-val packageDef = "package org.apache.mxnet"
-val imports = "import org.apache.mxnet.annotation.Experimental"
-val absClassDef = s"abstract class $packageName"
-val finalStr = 
s"$apacheLicence\n$scalaStyle\n$packageDef\n$imports\n$absClassDef 
{\n${absFuncs.mkString("\n")}\n}"
-val pw = new PrintWriter(new File(FILE_PATH + s"$packageName.scala"))
-pw.write(finalStr)
-   
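
A quick illustration of the MD5Generator pattern shown in the diff above: hash
the generated source, then URL-safe Base64-encode the digest. This is a minimal
standalone Scala sketch, substituting java.util.Base64 for the commons-codec
encoder used in the commit so it runs with no extra dependency; the input string
is only an example.

import java.security.MessageDigest
import java.util.Base64

object Md5Demo {
  // MD5 the input, then URL-safe Base64-encode the digest without padding,
  // mirroring the MD5Generator helper in the diff above.
  def md5UrlSafe(input: String): String = {
    val md = MessageDigest.getInstance("MD5")
    md.update(input.getBytes("UTF-8"))
    Base64.getUrlEncoder.withoutPadding.encodeToString(md.digest())
  }

  def main(args: Array[String]): Unit =
    println(md5UrlSafe("SymbolAPIBase"))
}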

[incubator-mxnet] branch master updated (3c3506f -> f0140b3)

2018-10-09 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 3c3506f  Add resnet50-v1 to benchmark_score (#12595)
 add f0140b3  [MXNET-716][MIRROR #12723] Scala Benchmark Extension pack 
(#12758)

No new revisions were added by this update.

Summary of changes:
 ...mage_inference_bm.sh => run_text_charrnn_bm.sh} | 22 +++---
 .../org/apache/mxnetexamples/benchmark/README.md   | 36 +-
 .../benchmark/ScalaInferenceBenchmark.scala| 19 +-
 .../objectdetector/SSDClassifierExample.scala  | 79 +++---
 .../org/apache/mxnetexamples/rnn/TestCharRnn.scala | 71 ---
 .../benchmark/ScalaInferenceBenchmarkSuite.scala   | 52 ++
 6 files changed, 244 insertions(+), 35 deletions(-)
 copy scala-package/examples/scripts/benchmark/{run_image_inference_bm.sh => 
run_text_charrnn_bm.sh} (82%)



[incubator-mxnet] branch v1.3.x updated: update NDCollector doc (#12349)

2018-08-24 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch v1.3.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.3.x by this push:
 new 9faecd4  update NDCollector doc (#12349)
9faecd4 is described below

commit 9faecd4092019281bfa341454663c58d06bd3e24
Author: Yizhi Liu 
AuthorDate: Fri Aug 24 20:30:31 2018 -0700

update NDCollector doc (#12349)

* explain the details for Scala Experimental
---
 .../core/src/main/scala/org/apache/mxnet/NDArrayCollector.scala   | 4 
 .../src/main/scala/org/apache/mxnet/annotation/Experimental.scala | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git 
a/scala-package/core/src/main/scala/org/apache/mxnet/NDArrayCollector.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/NDArrayCollector.scala
index 3952b73..0b7f9af 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/NDArrayCollector.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/NDArrayCollector.scala
@@ -133,6 +133,10 @@ class NDArrayCollector private(private val autoDispose: 
Boolean = true,
* If the return type of scope is NDArray or 
NDArrayFuncReturn,
* it is smart enough NOT to collect or dispose the returned NDArray. 
* However in other cases, it is users' responsibility NOT to leak allocated 
NDArrays outside.
+   * 
+   * We might switch to try-with-resources statement (by AutoCloseable in 
Java 1.7+)
+   * and deprecate this method later, thus it is marked as Experimental.
+   *
* @param codeBlock code block to be executed within the scope.
* @tparam T return type of the function codeBlock.
* @return The result of function codeBlock.
diff --git 
a/scala-package/core/src/main/scala/org/apache/mxnet/annotation/Experimental.scala
 
b/scala-package/core/src/main/scala/org/apache/mxnet/annotation/Experimental.scala
index 147d651..d63194d 100644
--- 
a/scala-package/core/src/main/scala/org/apache/mxnet/annotation/Experimental.scala
+++ 
b/scala-package/core/src/main/scala/org/apache/mxnet/annotation/Experimental.scala
@@ -21,7 +21,7 @@ import java.lang.annotation.{ElementType, Retention, Target, 
_}
 
 /**
   * Experimental: there is a comparably high chance that
-  * the API will undergo some kind of changes
+  * the API will be changed or removed.
   */
 @Retention(RetentionPolicy.RUNTIME)
 @Target(Array(ElementType.TYPE, ElementType.FIELD, ElementType.METHOD, 
ElementType.PARAMETER,



[incubator-mxnet] branch master updated (15e43c0 -> 5b37cf6)

2018-08-24 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 15e43c0  Fall back when sparse arrays are passed to MKLDNN-enabled 
operators (#11664)
 add 5b37cf6  explain the details for Scala Experimental (#12348)

No new revisions were added by this update.

Summary of changes:
 .../core/src/main/scala/org/apache/mxnet/NDArrayCollector.scala   | 4 
 .../src/main/scala/org/apache/mxnet/annotation/Experimental.scala | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)



[incubator-mxnet] branch master updated: Updating R client docs (#11954)

2018-08-01 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 061076d  Updating R client docs (#11954)
061076d is described below

commit 061076dc83fbd26bc88911c3b0dbcbee81095d1f
Author: Sergey Sokolov 
AuthorDate: Wed Aug 1 15:24:35 2018 -0700

Updating R client docs (#11954)

* Updating R client docs

* Forcing build
---
 R-package/R/mlp.R | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/R-package/R/mlp.R b/R-package/R/mlp.R
index ecc3099..aa510d1 100644
--- a/R-package/R/mlp.R
+++ b/R-package/R/mlp.R
@@ -8,7 +8,7 @@
 #' @param activation either a single string or a vector containing the names 
of the activation functions.
 #' @param out_activation a single string containing the name of the output 
activation function.
 #' @param ctx whether train on cpu (default) or gpu.
-#' @param eval_metric the evaluation metric/
+#' @param eval.metric the evaluation metric/
 #' @param ... other parameters passing to \code{mx.model.FeedForward.create}/
 #' 
 #' @examples



[incubator-mxnet] branch master updated (4b8ab63 -> 1031fe1)

2018-07-18 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 4b8ab63  MXNet docs change for 1.2.1 release (#11791)
 add 1031fe1  [MXNET-600][Scala] NDArray auto-collector (#11751)

No new revisions were added by this update.

Summary of changes:
 .../src/main/scala/org/apache/mxnet/Executor.scala |   2 +-
 .../src/main/scala/org/apache/mxnet/Monitor.scala  |   2 +-
 .../src/main/scala/org/apache/mxnet/NDArray.scala  |   9 +-
 .../scala/org/apache/mxnet/NDArrayCollector.scala  | 159 +
 .../src/main/scala/org/apache/mxnet/Operator.scala |   4 +-
 .../scala/org/apache/mxnet/io/NDArrayIter.scala|  17 +--
 .../org/apache/mxnet/NDArrayCollectorSuite.scala   |  71 +
 .../main/native/org_apache_mxnet_native_c_api.cc   |   2 +-
 8 files changed, 251 insertions(+), 15 deletions(-)
 create mode 100644 
scala-package/core/src/main/scala/org/apache/mxnet/NDArrayCollector.scala
 create mode 100644 
scala-package/core/src/test/scala/org/apache/mxnet/NDArrayCollectorSuite.scala



[incubator-mxnet] branch v1.2.0-java updated: add NDArrayCollector

2018-07-13 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch v1.2.0-java
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.2.0-java by this push:
 new 372b396  add NDArrayCollector
372b396 is described below

commit 372b396ace4b1700b18503351268a06107c548d6
Author: Yizhi Liu 
AuthorDate: Fri Jul 13 11:18:24 2018 -0700

add NDArrayCollector
---
 .../src/main/scala/org/apache/mxnet/Executor.scala |   2 +-
 .../src/main/scala/org/apache/mxnet/Monitor.scala  |   2 +-
 .../src/main/scala/org/apache/mxnet/NDArray.scala  |   8 +-
 .../scala/org/apache/mxnet/NDArrayCollector.scala  | 157 +
 .../src/main/scala/org/apache/mxnet/Operator.scala |   4 +-
 .../scala/org/apache/mxnet/io/NDArrayIter.scala|  17 +--
 .../org/apache/mxnet/NDArrayCollectorSuite.scala   |  67 +
 7 files changed, 243 insertions(+), 14 deletions(-)

diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/Executor.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/Executor.scala
index 2f79b58..181b232 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/Executor.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/Executor.scala
@@ -167,7 +167,7 @@ class Executor private[mxnet](private[mxnet] val handle: 
ExecutorHandle,
   private def getOutputs: Array[NDArray] = {
 val ndHandles = ArrayBuffer[NDArrayHandle]()
 checkCall(_LIB.mxExecutorOutputs(handle, ndHandles))
-ndHandles.toArray.map(new NDArray(_))
+ndHandles.toArray.map(new NDArray(_, addToCollector = false))
   }
 
   /**
diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/Monitor.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/Monitor.scala
index 8e53d65..c8a251d 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/Monitor.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/Monitor.scala
@@ -51,7 +51,7 @@ class Monitor(
 override def invoke(name: String, arr: NDArrayHandle): Unit = {
   // wrapper for executor callback
   if (activated) {
-val array = new NDArray(arr, writable = false)
+val array = new NDArray(arr, writable = false, addToCollector = false)
 val elem = (step, name, statFunc(array))
 queue += elem
   }
diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
index e8c687e..844621d1 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
@@ -547,11 +547,15 @@ object NDArray {
  * 
  */
 class NDArray private[mxnet](private[mxnet] val handle: NDArrayHandle,
- val writable: Boolean = true) extends 
WarnIfNotDisposed {
+ val writable: Boolean = true,
+ addToCollector: Boolean = true) extends 
WarnIfNotDisposed {
+  if (addToCollector) {
+NDArrayCollector.collect(this)
+  }
   // record arrays who construct this array instance
   // we use weak reference to prevent gc blocking
   private[mxnet] val dependencies = mutable.HashMap.empty[Long, 
WeakReference[NDArray]]
-  private var disposed = false
+  @volatile private var disposed = false
   def isDisposed: Boolean = disposed
 
   def serialize(): Array[Byte] = {
diff --git 
a/scala-package/core/src/main/scala/org/apache/mxnet/NDArrayCollector.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/NDArrayCollector.scala
new file mode 100644
index 000..b5ae44b
--- /dev/null
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/NDArrayCollector.scala
@@ -0,0 +1,157 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.mxnet
+
+import org.slf4j.LoggerFactory
+
+import scala.annotation.varargs
+import scala.collection.mutable
+
+/**
+  *  A collector to store NDArrays.
+  *  It provides a scope, NDArrays allocated in the scope can either 
+  *  - be disposed automatically when the code block finishes, or 
+  *  - simply be collected for future usage.
+  *  
+  *

[incubator-mxnet] branch v1.2.0-java updated: remove varargs for NDArray operators

2018-06-13 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch v1.2.0-java
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.2.0-java by this push:
 new 455150c  remove varargs for NDArray operators
455150c is described below

commit 455150c8ad3d53916b4e5523c7ea91dc4df0fe9e
Author: Yizhi Liu 
AuthorDate: Wed Jun 13 18:01:09 2018 -0700

remove varargs for NDArray operators
---
 scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala 
b/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
index c4d16bc..036b9ec 100644
--- a/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
+++ b/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
@@ -70,7 +70,7 @@ private[mxnet] object NDArrayMacro {
 // def transpose(kwargs: Map[String, Any] = null)(args: Any*)
 q"def $termName(kwargs: Map[String, Any] = null)(args: Any*) = 
{genericNDArrayFunctionInvoke($funcName, args, kwargs)}",
 // def transpose(args: Any*)
-q"@scala.annotation.varargs def $termName(args: Any*) = 
{genericNDArrayFunctionInvoke($funcName, args, null)}"
+q"def $termName(args: Any*) = {genericNDArrayFunctionInvoke($funcName, 
args, null)}"
 // scalastyle:on
   )
 }

-- 
To stop receiving notification emails like this one, please contact
liuyi...@apache.org.
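
For context, the macro touched by this commit generates two invocation styles
for every NDArray operator, both dispatching through
genericNDArrayFunctionInvoke. A hedged sketch of how the generated functions are
called (the operator choice, the "axes" kwarg, and its string encoding are
illustrative; kwargs values are stringified before dispatch, and .head extracts
the first output from the returned NDArrayFuncReturn):

import org.apache.mxnet.NDArray

object GeneratedOpDemo {
  def main(args: Array[String]): Unit = {
    val x = NDArray.ones(2, 3)

    // Positional form: def transpose(args: Any*)
    val y = NDArray.transpose(x).head

    // Keyword form: def transpose(kwargs: Map[String, Any] = null)(args: Any*)
    val z = NDArray.transpose(Map("axes" -> "(1, 0)"))(x).head

    println(y.shape) // (3,2)
    println(z.shape) // (3,2)
  }
}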


[incubator-mxnet] branch master updated: [MXNET-386] ongoing maintenance on NDArray (#11126)

2018-06-12 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 74479b8  [MXNET-386] ongoing maintenance on NDArray (#11126)
74479b8 is described below

commit 74479b89eaba8241573079aa5e32f0ba0f8dd00e
Author: Lanking 
AuthorDate: Tue Jun 12 17:06:13 2018 -0700

[MXNET-386] ongoing maintenance on NDArray (#11126)

* Important ndarray feature

* merge generic function Invoke

* Pass the Scala Style test

* add Experimental tags

* Change with NDArgs addition

* change dir for Experimental tag

* reTrigger CI

* add Symbol Macros change

* Add some workaround on NDArray

* Simplify the base part

* add changes on ND and Symbols...

* avoid vars

* add Symbol Macros

* Trigger the CI

* Trigger CI
---
 .../src/main/scala/org/apache/mxnet/NDArray.scala  | 17 +---
 .../org/apache/mxnet/annotation/Experimental.scala | 25 
 .../scala/org/apache/mxnet/APIDocGenerator.scala   |  7 +++-
 .../main/scala/org/apache/mxnet/NDArrayMacro.scala | 47 +-
 .../main/scala/org/apache/mxnet/SymbolMacro.scala  | 24 ---
 5 files changed, 87 insertions(+), 33 deletions(-)

diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
index 469107a..49f4d35 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
@@ -65,12 +65,12 @@ object NDArray {
 val ndArgs = ArrayBuffer.empty[NDArray]
 val posArgs = ArrayBuffer.empty[String]
 args.foreach {
-  case arr: NDArray =>
-ndArgs.append(arr)
-  case arrFunRet: NDArrayFuncReturn =>
-arrFunRet.arr.foreach(ndArgs.append(_))
-  case arg =>
-posArgs.append(arg.toString)
+case arr: NDArray =>
+  ndArgs.append(arr)
+case arrFunRet: NDArrayFuncReturn =>
+  arrFunRet.arr.foreach(ndArgs.append(_))
+case arg =>
+  posArgs.append(arg.toString)
 }
 
 require(posArgs.length <= function.arguments.length,
@@ -81,6 +81,7 @@ object NDArray {
 ++ function.arguments.slice(0, posArgs.length).zip(posArgs) - "out"
   ).map { case (k, v) => k -> v.toString }
 
+
 val (oriOutputs, outputVars) =
   if (kwargs != null && kwargs.contains("out")) {
 val output = kwargs("out")
@@ -537,6 +538,10 @@ object NDArray {
 new NDArray(handleRef.value)
   }
 
+  private def _crop_assign(kwargs: Map[String, Any] = null)(args: Any*) : 
NDArrayFuncReturn = {
+genericNDArrayFunctionInvoke("_crop_assign", args, kwargs)
+  }
+
   // TODO: imdecode
 }
 
diff --git 
a/scala-package/core/src/main/scala/org/apache/mxnet/annotation/Experimental.scala
 
b/scala-package/core/src/main/scala/org/apache/mxnet/annotation/Experimental.scala
new file mode 100644
index 0000000..33d1d33
--- /dev/null
+++ 
b/scala-package/core/src/main/scala/org/apache/mxnet/annotation/Experimental.scala
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.mxnet.annotation
+
+import java.lang.annotation.{ElementType, Retention, Target, _}
+
+@Retention(RetentionPolicy.RUNTIME)
+@Target(Array(ElementType.TYPE, ElementType.FIELD, ElementType.METHOD, 
ElementType.PARAMETER,
+  ElementType.CONSTRUCTOR, ElementType.LOCAL_VARIABLE, ElementType.PACKAGE))
+class Experimental {}
diff --git 
a/scala-package/macros/src/main/scala/org/apache/mxnet/APIDocGenerator.scala 
b/scala-package/macros/src/main/scala/org/apache/mxnet/APIDocGenerator.scala
index 90fe260..3bbc7fd 100644
--- a/scala-package/macros/src/main/scala/org/apache/mxnet/APIDocGenerator.scala
+++ b/scala-package/macros/src/main/scala/org/apache/mxnet/APIDocGenerator.scala
@@ -52,8 +52,9 @@ private[mxnet] object APIDocGenerator{
 val apacheLicence = "
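
For reference, the re-indented match above is the argument-splitting step of genericNDArrayFunctionInvoke: NDArray-like values are collected, everything else is stringified as a positional argument. A self-contained sketch of the same pattern, using simplified stand-in types rather than real NDArrays:

  import scala.collection.mutable.ArrayBuffer

  object ArgSplitDemo {
    // Stand-in for NDArray; only its role in the match matters here.
    case class Arr(id: Int)

    def main(args: Array[String]): Unit = {
      val args0: Seq[Any] = Seq(Arr(1), 2.5, "valid")
      val ndArgs  = ArrayBuffer.empty[Arr]
      val posArgs = ArrayBuffer.empty[String]
      args0.foreach {
        case arr: Arr => ndArgs.append(arr)
        case arg      => posArgs.append(arg.toString)
      }
      println(s"ndArgs=$ndArgs posArgs=$posArgs")
    }
  }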

[incubator-mxnet] branch master updated: [MXNET-62] add test against spark integration (#10462)

2018-06-11 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new ed80ff2  [MXNET-62] add test against spark integration (#10462)
ed80ff2 is described below

commit ed80ff2c01ff54e82215bf03e8df942ea729a15e
Author: Nan Zhu 
AuthorDate: Mon Jun 11 18:22:59 2018 -0700

[MXNET-62] add test against spark integration (#10462)

* fix bug

* temp

* temp

* temp

* update

* update

* update

* remove debugging stubs

* remove unused

* stylistic fix

* fix typo

* Pulled down update to submodule_dir

* add test

* retrigger it

* sync 3rd party
---
 3rdparty/ps-lite   |   2 +-
 include/mxnet/kvstore.h|   1 +
 .../scala/org/apache/mxnet/optimizer/SGD.scala |   7 +-
 scala-package/pom.xml  |   2 +-
 scala-package/spark/bin/run-mnist-example.sh   |   9 +-
 scala-package/spark/pom.xml|  39 +-
 .../main/scala/org/apache/mxnet/spark/MXNet.scala  |   7 +-
 .../scala/org/apache/mxnet/spark/MXNetParams.scala |   6 +-
 .../org/apache/mxnet/spark/ParameterServer.scala   |   6 +-
 .../spark/example/ClassificationExample.scala  |   1 +
 .../org/apache/mxnet/spark/MXNetGeneralSuite.scala |  69 ++
 .../apache/mxnet/spark/SharedSparkContext.scala| 146 +
 src/kvstore/kvstore_dist.h |   3 +-
 src/kvstore/kvstore_dist_server.h  |   3 +-
 14 files changed, 282 insertions(+), 19 deletions(-)

diff --git a/3rdparty/ps-lite b/3rdparty/ps-lite
index a6dda54..8a76389 16
--- a/3rdparty/ps-lite
+++ b/3rdparty/ps-lite
@@ -1 +1 @@
-Subproject commit a6dda54604a07d1fb21b016ed1e3f4246b08222a
+Subproject commit 8a763892a973afc1acd3d4b469d05bb338a83a6e
diff --git a/include/mxnet/kvstore.h b/include/mxnet/kvstore.h
index 4e99a9c..9e92207 100644
--- a/include/mxnet/kvstore.h
+++ b/include/mxnet/kvstore.h
@@ -229,6 +229,7 @@ class KVStore {
 CHECK(updater) << "invalid updater";
 updater_ = updater;
   }
+
   /*!
* \brief set an updater with string keys
*
diff --git 
a/scala-package/core/src/main/scala/org/apache/mxnet/optimizer/SGD.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/optimizer/SGD.scala
index c1b7259..e228e72 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/optimizer/SGD.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/optimizer/SGD.scala
@@ -41,14 +41,15 @@ class SGD(val learningRate: Float = 0.01f, momentum: Float 
= 0.0f,
*/
   override def update(index: Int, weight: NDArray, grad: NDArray, state: 
AnyRef): Unit = {
 // TODO(bing) implement wd_bias, wd_gamma, wd_beta (copy from python 
package)
-var lr =
-  (if (lrScheduler != null) {
+var lr = {
+  if (lrScheduler != null) {
 val scheduledLr = lrScheduler(numUpdate)
 updateCount(index)
 scheduledLr
   } else {
 this.learningRate
-  })
+  }
+}
 lr = getLr(index, lr)
 
 val wd = getWd(index, this.wd)
diff --git a/scala-package/pom.xml b/scala-package/pom.xml
index 9dcfa7c..cd5dba8 100644
--- a/scala-package/pom.xml
+++ b/scala-package/pom.xml
@@ -242,7 +242,7 @@
   
 org.apache.maven.plugins
 maven-surefire-plugin
-2.7
+2.19
 
   true
 
diff --git a/scala-package/spark/bin/run-mnist-example.sh 
b/scala-package/spark/bin/run-mnist-example.sh
index 962c337..392d6c6 100755
--- a/scala-package/spark/bin/run-mnist-example.sh
+++ b/scala-package/spark/bin/run-mnist-example.sh
@@ -17,6 +17,8 @@
 # specific language governing permissions and limitations
 # under the License.
 
+set -x
+
 CURR_DIR=$(cd `dirname $0`; pwd)
 SPARK_MODULE_DIR=$(cd $CURR_DIR/../; pwd)
 SCALA_PKG_DIR=$(cd $CURR_DIR/../../; pwd)
@@ -35,10 +37,7 @@ SPARK_JAR=`find ${SPARK_MODULE_DIR}/target -name "*.jar" 
-type f -exec ls "{}" +
 SCALA_JAR=`find ${SCALA_PKG_DIR}/assembly/$OS/target -maxdepth 1 -name "*.jar" 
-type f -exec ls "{}" + | grep -v -E '(javadoc|sources)'`
 
 SPARK_OPTS+=" --name mxnet-spark-mnist"
-SPARK_OPTS+=" --driver-memory 1g"
-SPARK_OPTS+=" --executor-memory 1g"
-SPARK_OPTS+=" --num-executors 2"
-SPARK_OPTS+=" --executor-cores 1"
+SPARK_OPTS+=" --driver-memory 2g"
 SPARK_OPTS+=" --jars ${SCALA_JAR}"
 
 # Download training and test set
@@ -72,7 +71,7 @@ fi
 
 HOST=`hostname`
 
-$SPARK_HOME/bin/spark-submit --master spark://$HOST:7077 \
+$SPARK_HOME/bin/spark-submit --master local[*] \
   --class org.apache.mxnet.spark.example.ClassificationExample \
   ${SPARK_

[incubator-mxnet] branch v1.2.0-java updated: improve NDArrayIter to have Builder and ability to specify names

2018-06-08 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch v1.2.0-java
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.2.0-java by this push:
 new 4177175  improve NDArrayIter to have Builder and ability to specify 
names
4177175 is described below

commit 41771756420f0bed0edd8c1369f6010d73ebfa26
Author: Yizhi Liu 
AuthorDate: Fri Jun 8 17:39:31 2018 -0700

improve NDArrayIter to have Builder and ability to specify names
---
 .../scala/org/apache/mxnet/io/NDArrayIter.scala| 90 +++---
 .../src/test/scala/org/apache/mxnet/IOSuite.scala  |  8 +-
 2 files changed, 70 insertions(+), 28 deletions(-)

diff --git 
a/scala-package/core/src/main/scala/org/apache/mxnet/io/NDArrayIter.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/io/NDArrayIter.scala
index 5108938..ed3c5ad 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/io/NDArrayIter.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/io/NDArrayIter.scala
@@ -23,6 +23,7 @@ import org.apache.mxnet.Base._
 import org.apache.mxnet._
 import org.slf4j.LoggerFactory
 
+import scala.annotation.varargs
 import scala.collection.immutable.ListMap
 
 /**
@@ -38,15 +39,23 @@ import scala.collection.immutable.ListMap
  * the size of data does not match batch_size. Roll over is intended
  * for training and can cause problems if used for prediction.
  */
-class NDArrayIter (data: IndexedSeq[NDArray], label: IndexedSeq[NDArray] = 
IndexedSeq.empty,
-  private val dataBatchSize: Int = 1, shuffle: Boolean = false,
-  lastBatchHandle: String = "pad",
-  dataName: String = "data", labelName: String = "label") 
extends DataIter {
-  private val logger = LoggerFactory.getLogger(classOf[NDArrayIter])
+class NDArrayIter(data: IndexedSeq[(String, NDArray)],
+  label: IndexedSeq[(String, NDArray)],
+  private val dataBatchSize: Int, shuffle: Boolean,
+  lastBatchHandle: String) extends DataIter {
+
+  def this(data: IndexedSeq[NDArray], label: IndexedSeq[NDArray] = 
IndexedSeq.empty,
+   dataBatchSize: Int = 1, shuffle: Boolean = false,
+   lastBatchHandle: String = "pad",
+   dataName: String = "data", labelName: String = "label") {
+this(IO.initData(data, allowEmpty = false, dataName),
+  IO.initData(label, allowEmpty = true, labelName),
+  dataBatchSize, shuffle, lastBatchHandle)
+  }
 
+  private val logger = LoggerFactory.getLogger(classOf[NDArrayIter])
 
-  private val (_dataList: IndexedSeq[NDArray],
-  _labelList: IndexedSeq[NDArray]) = {
+  val (initData: IndexedSeq[(String, NDArray)], initLabel: IndexedSeq[(String, 
NDArray)]) = {
 // data should not be null and size > 0
 require(data != null && data.size > 0,
   "data should not be null and data.size should not be zero")
@@ -55,17 +64,17 @@ class NDArrayIter (data: IndexedSeq[NDArray], label: 
IndexedSeq[NDArray] = Index
   "label should not be null. Use IndexedSeq.empty if there are no labels")
 
 // shuffle is not supported currently
-require(shuffle == false, "shuffle is not supported currently")
+require(!shuffle, "shuffle is not supported currently")
 
 // discard final part if lastBatchHandle equals discard
 if (lastBatchHandle.equals("discard")) {
-  val dataSize = data(0).shape(0)
+  val dataSize = data(0)._2.shape(0)
   require(dataBatchSize <= dataSize,
 "batch_size need to be smaller than data size when not padding.")
   val keepSize = dataSize - dataSize % dataBatchSize
-  val dataList = data.map(ndArray => {ndArray.slice(0, keepSize)})
+  val dataList = data.map { case (name, ndArray) => (name, 
ndArray.slice(0, keepSize)) }
   if (!label.isEmpty) {
-val labelList = label.map(ndArray => {ndArray.slice(0, keepSize)})
+val labelList = label.map { case (name, ndArray) => (name, 
ndArray.slice(0, keepSize)) }
 (dataList, labelList)
   } else {
 (dataList, label)
@@ -75,13 +84,9 @@ class NDArrayIter (data: IndexedSeq[NDArray], label: 
IndexedSeq[NDArray] = Index
 }
   }
 
-
-  val initData: IndexedSeq[(String, NDArray)] = IO.initData(_dataList, false, 
dataName)
-  val initLabel: IndexedSeq[(String, NDArray)] = IO.initData(_labelList, true, 
labelName)
-  val numData = _dataList(0).shape(0)
-  val numSource = initData.size
-  var cursor = -dataBatchSize
-
+  val numData = initData(0)._2.shape(0)
+  val numSource: MXUint = initData.size
+  private var cursor = -dataBatchSize
 
   private val (_provideData: ListMap[String, Shape],
_provideLabel: ListMap[String, Shape]) = {
@@ -112,8 +117,8 @@ class NDArrayIter (data: Index
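
From the constructor signatures above, a hedged usage sketch of the new named-input form; array shapes and input names are illustrative, and the auxiliary constructor keeps the old IndexedSeq[NDArray] call style working:

  import org.apache.mxnet.{NDArray, Shape}
  import org.apache.mxnet.io.NDArrayIter

  object IterDemo {
    def main(args: Array[String]): Unit = {
      // names travel with the arrays, so the dataName/labelName
      // parameters are no longer needed on the primary constructor
      val data  = IndexedSeq(("myData",  NDArray.zeros(Shape(100, 3))))
      val label = IndexedSeq(("myLabel", NDArray.zeros(Shape(100))))
      val iter  = new NDArrayIter(data, label, 10, false, "pad")
      println(iter.initData.map(_._1)) // Vector(myData)
    }
  }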

[incubator-mxnet] branch master updated: [MXNET-471] Add Builder class for Scala Module and DataBatch to simplify construction (#11045)

2018-05-27 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 5153370  [MXNET-471] Add Builder class for Scala Module and DataBatch 
to simplify construction (#11045)
5153370 is described below

commit 5153370e3a3d922bbccdf3d28b5c6f31995722fe
Author: Yizhi Liu <liuyi...@apache.org>
AuthorDate: Sun May 27 14:07:31 2018 -0700

[MXNET-471] Add Builder class for Scala Module and DataBatch to simplify 
construction (#11045)

* Add Builder class for Module and DataBatch to simplify construction. Add 
annotation to enable varargs in Java

* change provideData & provideLabel to more proper names. add test cases

* lint code

* add comments for type-safe

* fix test for DataBatch

* remove varargs

* check data != null in DataBatch.Builder
---
 .../core/src/main/scala/org/apache/mxnet/IO.scala  | 105 -
 .../src/main/scala/org/apache/mxnet/Shape.scala|   4 +
 .../src/main/scala/org/apache/mxnet/Symbol.scala   |   1 -
 .../scala/org/apache/mxnet/module/BaseModule.scala |  30 ++
 .../scala/org/apache/mxnet/module/Module.scala |  73 +-
 .../test/scala/org/apache/mxnet/ModuleSuite.scala  |  28 +++---
 .../main/scala/org/apache/mxnet/NDArrayMacro.scala |   3 +
 7 files changed, 230 insertions(+), 14 deletions(-)

diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
index 7a9c1a7..d9c767c 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
@@ -19,9 +19,10 @@ package org.apache.mxnet
 
 import org.apache.mxnet.Base._
 import org.apache.mxnet.DType.DType
-import org.apache.mxnet.io.{MXDataPack, MXDataIter}
+import org.apache.mxnet.io.{MXDataIter, MXDataPack}
 import org.slf4j.LoggerFactory
 
+import scala.annotation.varargs
 import scala.collection.immutable.ListMap
 import scala.collection.mutable.ListBuffer
 
@@ -160,6 +161,108 @@ class DataBatch(val data: IndexedSeq[NDArray],
   def provideLabel: ListMap[String, Shape] = providedLabel
 }
 
+object DataBatch {
+  /**
+   * Builder class for DataBatch.
+   */
+  class Builder() {
+private var data: IndexedSeq[NDArray] = null
+private var label: IndexedSeq[NDArray] = null
+private var index: IndexedSeq[Long] = null
+private var pad: Int = 0
+private var bucketKey: AnyRef = null
+private var datatShapes: ListMap[String, Shape] = null
+private var labelShapes: ListMap[String, Shape] = null
+
+/**
+ * Set the input data.
+ * @param data a list of data.
+ * @return this.
+ */
+@varargs def setData(data: NDArray*): Builder = {
+  this.data = data.toIndexedSeq
+  this
+}
+
+/**
+ * Set the labels in the same order of data.
+ * @param label a list of labels.
+ * @return this.
+ */
+@varargs def setLabel(label: NDArray*): Builder = {
+  this.label = label.toIndexedSeq
+  this
+}
+
+/**
+ * Set the example indices in this batch.
+ * @param index indices in the same order of data.
+ * @return this.
+ */
+@varargs def setIndex(index: Long*): Builder = {
+  this.index = index.toIndexedSeq
+  this
+}
+
+/**
+ * Set the pad.
+ * @param pad The number of examples padded at the end of a batch. It is 
used when the
+ *total number of examples read is not divisible by the 
`batch_size`.
+ *These extra padded examples are ignored in prediction.
+ * @return this
+ */
+def setPad(pad: Int): Builder = {
+  this.pad = pad
+  this
+}
+
+/**
+ * Set the bucket key, used for bucketing module.
+ * @param bucketKey the bucket key related to this batch.
+ * @return this.
+ */
+def setBucketKey(bucketKey: AnyRef): Builder = {
+  this.bucketKey = bucketKey
+  this
+}
+
+/**
+ * Provide the shape of a data.
+ * @param name data name.
+ * @param shape data shape.
+ * @return this.
+ */
+def provideDataShape(name: String, shape: Shape): Builder = {
+  if (datatShapes == null) {
+datatShapes = ListMap((name, shape))
+  } else {
+datatShapes = datatShapes.updated(name, shape)
+  }
+  this
+}
+
+/**
+ * Provide the shape of a label.
+ * @param name label name.
+ * @param shape label shape.
+ * @return this.
+ */
+def provideLabelShape(name: String, shape: Shape): Builder = {
+  if (labelShapes == null) {
+labelShapes = ListMap((name, shape))
+  } else {
+labelShapes = labelShapes.updated(name, shape)
+  }
+  this
+}
+
+def build(): DataBatch = {
+  require(data != n
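
The diff cuts off inside build(), but the setters shown are enough for a hedged sketch of the fluent construction this Builder enables (shapes and names are illustrative):

  import org.apache.mxnet.{DataBatch, NDArray, Shape}

  object BatchDemo {
    def main(args: Array[String]): Unit = {
      val batch = new DataBatch.Builder()
        .setData(NDArray.ones(Shape(32, 3)))   // varargs: several arrays allowed
        .setLabel(NDArray.zeros(Shape(32)))
        .setPad(0)
        .provideDataShape("data", Shape(32, 3))
        .provideLabelShape("label", Shape(32))
        .build()
      println(batch.provideData)               // ListMap(data -> (32,3))
    }
  }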

[incubator-mxnet] branch master updated: [MXNET-357] New Scala API Design (NDArray) (#10787)

2018-05-23 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new b0d632f  [MXNET-357] New Scala API Design (NDArray) (#10787)
b0d632f is described below

commit b0d632f7ed9d59508e01e391fbe111ec5d1d2edd
Author: Lanking <lanking...@live.com>
AuthorDate: Wed May 23 11:43:20 2018 -0700

[MXNET-357] New Scala API Design (NDArray) (#10787)

* Add new NDArray APIs

* Add NDArray APIs

* change the impl into individual functions and add comments

* Quick fix on redundant code

* Change in Sync
---
 .../src/main/scala/org/apache/mxnet/NDArray.scala  |   2 +
 .../main/scala/org/apache/mxnet/NDArrayAPI.scala   |  24 +++
 .../main/scala/org/apache/mxnet/NDArrayMacro.scala | 195 +
 3 files changed, 189 insertions(+), 32 deletions(-)

diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
index 416f2d7..469107a 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
@@ -37,6 +37,8 @@ object NDArray {
 
   private val functions: Map[String, NDArrayFunction] = initNDArrayModule()
 
+  val api = NDArrayAPI
+
   private def addDependency(froms: Array[NDArray], tos: Array[NDArray]): Unit 
= {
 froms.foreach { from =>
   val weakRef = new WeakReference(from)
diff --git 
a/scala-package/core/src/main/scala/org/apache/mxnet/NDArrayAPI.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/NDArrayAPI.scala
new file mode 100644
index 0000000..d234ac6
--- /dev/null
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/NDArrayAPI.scala
@@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.mxnet
+@AddNDArrayAPIs(false)
+/**
+  * typesafe NDArray API: NDArray.api._
+  * Main code will be generated during compile time through Macros
+  */
+object NDArrayAPI {
+}
diff --git 
a/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala 
b/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
index 036b9ec..bbe786f 100644
--- a/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
+++ b/scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala
@@ -29,18 +29,26 @@ private[mxnet] class AddNDArrayFunctions(isContrib: 
Boolean) extends StaticAnnot
   private[mxnet] def macroTransform(annottees: Any*) = macro 
NDArrayMacro.addDefs
 }
 
+private[mxnet] class AddNDArrayAPIs(isContrib: Boolean) extends 
StaticAnnotation {
+  private[mxnet] def macroTransform(annottees: Any*) = macro 
NDArrayMacro.typeSafeAPIDefs
+}
+
 private[mxnet] object NDArrayMacro {
-  case class NDArrayFunction(handle: NDArrayHandle)
+  case class NDArrayArg(argName: String, argType: String, isOptional : Boolean)
+  case class NDArrayFunction(name: String, listOfArgs: List[NDArrayArg])
 
   // scalastyle:off havetype
   def addDefs(c: blackbox.Context)(annottees: c.Expr[Any]*) = {
-impl(c)(false, annottees: _*)
+impl(c)(annottees: _*)
+  }
+  def typeSafeAPIDefs(c: blackbox.Context)(annottees: c.Expr[Any]*) = {
+typeSafeAPIImpl(c)(annottees: _*)
   }
   // scalastyle:off havetype
 
-  private val ndarrayFunctions: Map[String, NDArrayFunction] = 
initNDArrayModule()
+  private val ndarrayFunctions: List[NDArrayFunction] = initNDArrayModule()
 
-  private def impl(c: blackbox.Context)(addSuper: Boolean, annottees: 
c.Expr[Any]*): c.Expr[Any] = {
+  private def impl(c: blackbox.Context)(annottees: c.Expr[Any]*): c.Expr[Any] 
= {
 import c.universe._
 
 val isContrib: Boolean = c.prefix.tree match {
@@ -48,40 +56,99 @@ private[mxnet] object NDArrayMacro {
 }
 
 val newNDArrayFunctions = {
-  if (isContrib) ndarrayFunctions.filter(_._1.startsWith("_contrib_"))
-  else ndarrayFunctions.filter(!_._1.startsWith("_contrib_"))
+  if (isContrib) ndarrayFunctions.filter(_.name.startsWith(
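
The diff is truncated mid-filter, but the new operator model is clear from the case classes above: each backend operator becomes an NDArrayFunction carrying typed argument descriptors, and contrib operators are split out by name prefix. A standalone mirror of that model; the concrete operator below is an assumption for illustration:

  object MacroModelDemo {
    case class NDArrayArg(argName: String, argType: String, isOptional: Boolean)
    case class NDArrayFunction(name: String, listOfArgs: List[NDArrayArg])

    def main(args: Array[String]): Unit = {
      val relu = NDArrayFunction("relu",
        List(NDArrayArg("data", "NDArray", isOptional = false)))
      // same filter shape as the truncated line above
      val (contrib, core) = List(relu).partition(_.name.startsWith("_contrib_"))
      println(s"core=${core.map(_.name)} contrib=${contrib.map(_.name)}")
    }
  }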

[incubator-mxnet] 01/01: add Builder and varargs to be java-friendly

2018-05-23 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch v1.2.0-java
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 9a3cccf92a634395364d775af95a42003548f971
Author: Yizhi Liu <yizhi...@amazon.com>
AuthorDate: Mon May 21 13:48:31 2018 -0700

add Builder and varargs to be java-friendly
---
 .../core/src/main/scala/org/apache/mxnet/IO.scala  | 56 +-
 .../src/main/scala/org/apache/mxnet/NDArray.scala  |  3 +-
 .../src/main/scala/org/apache/mxnet/Shape.scala|  4 ++
 .../src/main/scala/org/apache/mxnet/Symbol.scala   |  1 -
 .../scala/org/apache/mxnet/module/BaseModule.scala | 30 
 .../scala/org/apache/mxnet/module/Module.scala | 43 -
 .../main/scala/org/apache/mxnet/NDArrayMacro.scala | 52 ++--
 7 files changed, 138 insertions(+), 51 deletions(-)

diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
index 7a9c1a7..123e2f8 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
@@ -19,9 +19,10 @@ package org.apache.mxnet
 
 import org.apache.mxnet.Base._
 import org.apache.mxnet.DType.DType
-import org.apache.mxnet.io.{MXDataPack, MXDataIter}
+import org.apache.mxnet.io.{MXDataIter, MXDataPack}
 import org.slf4j.LoggerFactory
 
+import scala.annotation.varargs
 import scala.collection.immutable.ListMap
 import scala.collection.mutable.ListBuffer
 
@@ -140,6 +141,7 @@ class DataBatch(val data: IndexedSeq[NDArray],
 // (must match the order of input data/label)
 private val providedData: ListMap[String, Shape] = null,
 private val providedLabel: ListMap[String, Shape] = null) {
+
   /**
* Dispose its data and labels
* The object shall never be used after it is disposed.
@@ -160,6 +162,58 @@ class DataBatch(val data: IndexedSeq[NDArray],
   def provideLabel: ListMap[String, Shape] = providedLabel
 }
 
+object DataBatch {
+  class Builder() {
+private var data: IndexedSeq[NDArray] = null
+private var label: IndexedSeq[NDArray] = null
+private var index: IndexedSeq[Long] = null
+private var pad: Int = 0
+private var bucketKey: AnyRef = null
+private var providedData: ListMap[String, Shape] = ListMap.empty
+private var providedLabel: ListMap[String, Shape] = ListMap.empty
+
+@varargs def setData(data: NDArray*): Builder = {
+  this.data = data.toIndexedSeq
+  this
+}
+
+@varargs def setLabel(label: NDArray*): Builder = {
+  this.label = label.toIndexedSeq
+  this
+}
+
+@varargs def setIndex(index: Long*): Builder = {
+  this.index = index.toIndexedSeq
+  this
+}
+
+def setPad(pad: Int): Builder = {
+  this.pad = pad
+  this
+}
+
+def setBucketKey(bucketKey: AnyRef): Builder = {
+  this.bucketKey = bucketKey
+  this
+}
+
+def provideData(name: String, shape: Shape): Builder = {
+  providedData = providedData.updated(name, shape)
+  this
+}
+
+def provideLabel(name: String, shape: Shape): Builder = {
+  providedLabel = providedLabel.updated(name, shape)
+  this
+}
+
+def build(): DataBatch = {
+  new DataBatch(data, label, index, pad,
+bucketKey, providedData, providedLabel)
+}
+  }
+}
+
 /**
  * DataIter object in mxnet.
  */
diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
index 416f2d7..e8c687e 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
@@ -48,6 +48,7 @@ object NDArray {
 }
   }
 
+  //  private[mxnet] def genericNDArrayFunctionInvoke(
   /**
* Used by NDArrayMacro.
* Invoke this function by passing in parameters.
@@ -57,7 +58,7 @@ object NDArray {
* @param kwargs Key-value arguments of input scalars
* @return The result NDArrays of result of computation.
*/
-  private[mxnet] def genericNDArrayFunctionInvoke(
+  def genericNDArrayFunctionInvoke(
 funcName: String, args: Seq[Any], kwargs: Map[String, Any] = null): 
NDArrayFuncReturn = {
 val function = functions(funcName)
 val ndArgs = ArrayBuffer.empty[NDArray]
diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala
index e632ade..6891762 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala
@@ -17,6 +17,8 @@
 
 package org.apache.mxnet
 
+import scala.annotation.varargs
+
 /**
  * Shape of [[NDArray]] or other data
  */
@@ -28,6 +30,7 @@ class Shape(dims: Traversable[Int]) extends Serializable {
   }
 
   def apply(dim: Int

[incubator-mxnet] branch v1.2.0-java updated (c887376 -> 9a3cccf)

2018-05-23 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch v1.2.0-java
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


 discard c887376  add Builder and varargs which are java-friendly
 new 9a3cccf  add Builder and varargs to be java-friendly

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c887376)
            \
             N -- N -- N   refs/heads/v1.2.0-java (9a3cccf)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:

-- 
To stop receiving notification emails like this one, please contact
liuyi...@apache.org.


[incubator-mxnet] 01/01: add Builder and varargs which are java-friendly

2018-05-22 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch v1.2.0-java
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit c887376f24df1e3ce941f600c69b38026e71771d
Author: Yizhi Liu <yizhi...@amazon.com>
AuthorDate: Mon May 21 13:48:31 2018 -0700

add Builder and varargs which are java-friendly
---
 .../core/src/main/scala/org/apache/mxnet/IO.scala  | 56 +-
 .../src/main/scala/org/apache/mxnet/NDArray.scala  |  3 +-
 .../src/main/scala/org/apache/mxnet/Shape.scala|  4 ++
 .../src/main/scala/org/apache/mxnet/Symbol.scala   |  1 -
 .../scala/org/apache/mxnet/module/BaseModule.scala | 30 
 .../scala/org/apache/mxnet/module/Module.scala | 43 -
 .../main/scala/org/apache/mxnet/NDArrayMacro.scala | 52 ++--
 7 files changed, 138 insertions(+), 51 deletions(-)

diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
index 7a9c1a7..123e2f8 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
@@ -19,9 +19,10 @@ package org.apache.mxnet
 
 import org.apache.mxnet.Base._
 import org.apache.mxnet.DType.DType
-import org.apache.mxnet.io.{MXDataPack, MXDataIter}
+import org.apache.mxnet.io.{MXDataIter, MXDataPack}
 import org.slf4j.LoggerFactory
 
+import scala.annotation.varargs
 import scala.collection.immutable.ListMap
 import scala.collection.mutable.ListBuffer
 
@@ -140,6 +141,7 @@ class DataBatch(val data: IndexedSeq[NDArray],
 // (must match the order of input data/label)
 private val providedData: ListMap[String, Shape] = null,
 private val providedLabel: ListMap[String, Shape] = null) {
+
   /**
* Dispose its data and labels
* The object shall never be used after it is disposed.
@@ -160,6 +162,58 @@ class DataBatch(val data: IndexedSeq[NDArray],
   def provideLabel: ListMap[String, Shape] = providedLabel
 }
 
+object DataBatch {
+  class Builder() {
+private var data: IndexedSeq[NDArray] = null
+private var label: IndexedSeq[NDArray] = null
+private var index: IndexedSeq[Long] = null
+private var pad: Int = 0
+private var bucketKey: AnyRef = null
+private var providedData: ListMap[String, Shape] = ListMap.empty
+private var providedLabel: ListMap[String, Shape] = ListMap.empty
+
+@varargs def setData(data: NDArray*): Builder = {
+  this.data = data.toIndexedSeq
+  this
+}
+
+@varargs def setLabel(label: NDArray*): Builder = {
+  this.label = label.toIndexedSeq
+  this
+}
+
+@varargs def setIndex(index: Long*): Builder = {
+  this.index = index.toIndexedSeq
+  this
+}
+
+def setPad(pad: Int): Builder = {
+  this.pad = pad
+  this
+}
+
+def setBucketKey(bucketKey: AnyRef): Builder = {
+  this.bucketKey = bucketKey
+  this
+}
+
+def provideData(name: String, shape: Shape): Builder = {
+  providedData = providedData.updated(name, shape)
+  this
+}
+
+def provideLabel(name: String, shape: Shape): Builder = {
+  providedLabel = providedLabel.updated(name, shape)
+  this
+}
+
+def build(): DataBatch = {
+  new DataBatch(data, label, index, pad,
+bucketKey, providedData, providedLabel)
+}
+  }
+}
+
 /**
  * DataIter object in mxnet.
  */
diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
index 416f2d7..e8c687e 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
@@ -48,6 +48,7 @@ object NDArray {
 }
   }
 
+  //  private[mxnet] def genericNDArrayFunctionInvoke(
   /**
* Used by NDArrayMacro.
* Invoke this function by passing in parameters.
@@ -57,7 +58,7 @@ object NDArray {
* @param kwargs Key-value arguments of input scalars
* @return The result NDArrays of result of computation.
*/
-  private[mxnet] def genericNDArrayFunctionInvoke(
+  def genericNDArrayFunctionInvoke(
 funcName: String, args: Seq[Any], kwargs: Map[String, Any] = null): 
NDArrayFuncReturn = {
 val function = functions(funcName)
 val ndArgs = ArrayBuffer.empty[NDArray]
diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala
index e632ade..6891762 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala
@@ -17,6 +17,8 @@
 
 package org.apache.mxnet
 
+import scala.annotation.varargs
+
 /**
  * Shape of [[NDArray]] or other data
  */
@@ -28,6 +30,7 @@ class Shape(dims: Traversable[Int]) extends Serializable {
   }
 
   def apply(di

[incubator-mxnet] branch v1.2.0-java updated (9a7719e -> c887376)

2018-05-22 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch v1.2.0-java
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


 discard 9a7719e  add Builder and varargs which are easy for java to use
 new c887376  add Builder and varargs which are java-friendly

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (9a7719e)
            \
             N -- N -- N   refs/heads/v1.2.0-java (c887376)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:

-- 
To stop receiving notification emails like this one, please contact
liuyi...@apache.org.


[incubator-mxnet] 01/01: add Builder and varargs which are easy for java to use

2018-05-22 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch v1.2.0-java
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 9a7719e545918e7acbbe23c5684621f7307c3fca
Author: Yizhi Liu <yizhi...@amazon.com>
AuthorDate: Mon May 21 13:48:31 2018 -0700

add Builder and varargs which are easy for java to use
---
 .../core/src/main/scala/org/apache/mxnet/IO.scala  | 56 +-
 .../src/main/scala/org/apache/mxnet/NDArray.scala  |  3 +-
 .../src/main/scala/org/apache/mxnet/Shape.scala|  4 ++
 .../src/main/scala/org/apache/mxnet/Symbol.scala   |  1 -
 .../scala/org/apache/mxnet/module/BaseModule.scala | 30 
 .../scala/org/apache/mxnet/module/Module.scala | 43 -
 .../main/scala/org/apache/mxnet/NDArrayMacro.scala | 52 ++--
 7 files changed, 138 insertions(+), 51 deletions(-)

diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
index 7a9c1a7..123e2f8 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/IO.scala
@@ -19,9 +19,10 @@ package org.apache.mxnet
 
 import org.apache.mxnet.Base._
 import org.apache.mxnet.DType.DType
-import org.apache.mxnet.io.{MXDataPack, MXDataIter}
+import org.apache.mxnet.io.{MXDataIter, MXDataPack}
 import org.slf4j.LoggerFactory
 
+import scala.annotation.varargs
 import scala.collection.immutable.ListMap
 import scala.collection.mutable.ListBuffer
 
@@ -140,6 +141,7 @@ class DataBatch(val data: IndexedSeq[NDArray],
 // (must match the order of input data/label)
 private val providedData: ListMap[String, Shape] = null,
 private val providedLabel: ListMap[String, Shape] = null) {
+
   /**
* Dispose its data and labels
* The object shall never be used after it is disposed.
@@ -160,6 +162,58 @@ class DataBatch(val data: IndexedSeq[NDArray],
   def provideLabel: ListMap[String, Shape] = providedLabel
 }
 
+object DataBatch {
+  class Builder() {
+private var data: IndexedSeq[NDArray] = null
+private var label: IndexedSeq[NDArray] = null
+private var index: IndexedSeq[Long] = null
+private var pad: Int = 0
+private var bucketKey: AnyRef = null
+private var providedData: ListMap[String, Shape] = ListMap.empty
+private var providedLabel: ListMap[String, Shape] = ListMap.empty
+
+@varargs def setData(data: NDArray*): Builder = {
+  this.data = data.toIndexedSeq
+  this
+}
+
+@varargs def setLabel(label: NDArray*): Builder = {
+  this.label = label.toIndexedSeq
+  this
+}
+
+@varargs def setIndex(index: Long*): Builder = {
+  this.index = index.toIndexedSeq
+  this
+}
+
+def setPad(pad: Int): Builder = {
+  this.pad = pad
+  this
+}
+
+def setBucketKey(bucketKey: AnyRef): Builder = {
+  this.bucketKey = bucketKey
+  this
+}
+
+def provideData(name: String, shape: Shape): Builder = {
+  providedData = providedData.updated(name, shape)
+  this
+}
+
+def provideLabel(name: String, shape: Shape): Builder = {
+  providedLabel = providedLabel.updated(name, shape)
+  this
+}
+
+def build(): DataBatch = {
+  new DataBatch(data, label, index, pad,
+bucketKey, providedData, providedLabel)
+}
+  }
+}
+
 /**
  * DataIter object in mxnet.
  */
diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
index 416f2d7..e8c687e 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala
@@ -48,6 +48,7 @@ object NDArray {
 }
   }
 
+  //  private[mxnet] def genericNDArrayFunctionInvoke(
   /**
* Used by NDArrayMacro.
* Invoke this function by passing in parameters.
@@ -57,7 +58,7 @@ object NDArray {
* @param kwargs Key-value arguments of input scalars
* @return The result NDArrays of result of computation.
*/
-  private[mxnet] def genericNDArrayFunctionInvoke(
+  def genericNDArrayFunctionInvoke(
 funcName: String, args: Seq[Any], kwargs: Map[String, Any] = null): 
NDArrayFuncReturn = {
 val function = functions(funcName)
 val ndArgs = ArrayBuffer.empty[NDArray]
diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala 
b/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala
index e632ade..6891762 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/Shape.scala
@@ -17,6 +17,8 @@
 
 package org.apache.mxnet
 
+import scala.annotation.varargs
+
 /**
  * Shape of [[NDArray]] or other data
  */
@@ -28,6 +30,7 @@ class Shape(dims: Traversable[Int]) extends Serializable {
   }
 
   def app

[incubator-mxnet] branch v1.2.0-java updated (1e0d064 -> 9a7719e)

2018-05-22 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch v1.2.0-java
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


 discard 1e0d064  add Builder and @varargs which are easy for java to use
 new 9a7719e  add Builder and varargs which are easy for java to use

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (1e0d064)
            \
             N -- N -- N   refs/heads/v1.2.0-java (9a7719e)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:

-- 
To stop receiving notification emails like this one, please contact
liuyi...@apache.org.

