[GitHub] [incubator-mxnet] vexilligera commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
vexilligera commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338408083
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -6419,3 +6419,39 @@ def einsum(*operands, **kwargs):
 ... np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize=True)
 """
 return _mx_nd_np.einsum(*operands, **kwargs)
+
+@set_module('mxnet.numpy')
+def column_stack(tup):
+    """ column_stack(*args, **kwargs)
+
+    Stack 1-D arrays as columns into a 2-D array.
+
+    Take a sequence of 1-D arrays and stack them as columns
+    to make a single 2-D array. 2-D arrays are stacked as-is,
+    just like with `hstack`.  1-D arrays are turned into 2-D columns
+    first.
+
+    Parameters
+    ----------
+    tup : sequence of 1-D or 2-D arrays.
+        Arrays to stack. All of them must have the same first dimension.
+
+    Returns
+    -------
+    stacked : 2-D array
+        The array formed by stacking the given arrays.
+
+    See Also
+    --------
+    stack, hstack, vstack, concatenate
+
+    Examples
+    --------
+    >>> a = np.array((1,2,3))
 
 Review comment:
   Changed to float output with trailing decimal point.
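
   For reference, a minimal sketch of what the updated docstring example would look like (my reconstruction, assuming MXNet's default float32 dtype; not the literal PR diff):
   ```python
   >>> a = np.array((1, 2, 3))
   >>> b = np.array((2, 3, 4))
   >>> np.column_stack((a, b))
   array([[1., 2.],
          [2., 3.],
          [3., 4.]])
   ```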


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] vexilligera commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
vexilligera commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338407807
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -6419,3 +6419,39 @@ def einsum(*operands, **kwargs):
 ... np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize=True)
 """
 return _mx_nd_np.einsum(*operands, **kwargs)
+
+@set_module('mxnet.numpy')
+def column_stack(tup):
+    """ column_stack(*args, **kwargs)
+
+    Stack 1-D arrays as columns into a 2-D array.
+
+    Take a sequence of 1-D arrays and stack them as columns
+    to make a single 2-D array. 2-D arrays are stacked as-is,
+    just like with `hstack`.  1-D arrays are turned into 2-D columns
+    first.
+
+    Parameters
+    ----------
+    tup : sequence of 1-D or 2-D arrays.
+        Arrays to stack. All of them must have the same first dimension.
+
+    Returns
+    -------
+    stacked : 2-D array
+        The array formed by stacking the given arrays.
+
+    See Also
+    --------
+    stack, hstack, vstack, concatenate
+
+    Examples
+    --------
+    >>> a = np.array((1,2,3))
+    >>> b = np.array((2,3,4))
+    >>> np.column_stack((a,b))
+    array([[1, 2],
 
 Review comment:
   I didn't quite get it. Could you elaborate?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] vexilligera commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
vexilligera commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338407656
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -6419,3 +6419,39 @@ def einsum(*operands, **kwargs):
 ... np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize=True)
 """
 return _mx_nd_np.einsum(*operands, **kwargs)
+
+@set_module('mxnet.numpy')
+def column_stack(tup):
+""" column_stack(*args, **kwargs)
 
 Review comment:
   Removed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-10-23 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new bf95876  Bump the publish timestamp.
bf95876 is described below

commit bf95876ac94e527ea09d618a0f478e4003f0a253
Author: mxnet-ci 
AuthorDate: Thu Oct 24 06:41:55 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..2e98d1a
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Oct 24 06:41:55 UTC 2019



[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #16589: Fix index overflow bug in einsum

2019-10-23 Thread GitBox
hzfan commented on a change in pull request #16589: Fix index overflow bug in 
einsum
URL: https://github.com/apache/incubator-mxnet/pull/16589#discussion_r338403041
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -3377,16 +3377,20 @@ def dbg(name, data):
 
_np.dot(args[0].T, _np.dot(_np.ones((2, 2)), args[2].T)),
 
_np.dot(_np.dot(args[0], args[1]).T, _np.ones((2, 2),
 # broadcast bug
-(('ij, ij -> i'), [(1, 4), (2, 4)], lambda *args: (_np.sum(args[1], axis=0)[None, :],
-   _np.tile(args[0], [2, 1]))),
+('ij, ij -> i', [(1, 4), (2, 4)], lambda *args: (_np.sum(args[1], axis=0)[None, :],
+ _np.tile(args[0], [2, 1]))),
+# issue #16576
+# commented due to long running time
+# ('abiz,abjz->abij', [(64, 8, 128, 512), (64, 8, 128, 512)], lambda *args: (_np.matmul(_np.ones((64, 8, 128, 128)), args[1]),
+#   _np.matmul(_np.ones((64, 8, 128, 128)), args[0]))),
 ]
-dtypes = ['int32', 'float16', 'float32', 'float64']
+dtypes = ['int32', 'float32', 'float64']
 for hybridize in [False, True]:
 for dtype in dtypes:
 for config in configs:
 for optimize in [False, True]:
-rtol = 1e-0 if dtype == 'float16' else 1e-3
-atol = 1e-1 if dtype == 'float16' else 1e-5
+rtol = 1e-0 if dtype == 'float16' else 1e-1
+atol = 1e-1 if dtype == 'float16' else 1e-1
 
 Review comment:
  @sxjscience Falling back to batched GEMM is possible, but quite a lot of work 
needs to be done. Currently both official numpy and our implementation use 
`tensordot` as a fallback, but `tensordot` cannot handle batch dot.
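
   A minimal sketch in plain NumPy of why the case from issue #16576 is a batched GEMM that `tensordot` cannot express directly (shapes are shrunk stand-ins for the original (64, 8, 128, 512) operands):
   ```python
   import numpy as np

   a = np.random.rand(2, 3, 4, 5)  # stand-in for a (64, 8, 128, 512) operand
   b = np.random.rand(2, 3, 4, 5)

   out = np.einsum('abiz,abjz->abij', a, b)
   # The same contraction as a batched GEMM over the leading (a, b) axes;
   # tensordot contracts whole axes and has no notion of batch axes.
   ref = np.matmul(a, b.swapaxes(-1, -2))
   assert np.allclose(out, ref)
   ```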


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ZhennanQin commented on issue #16602: [MKLDNN]Fix reorder2default

2019-10-23 Thread GitBox
ZhennanQin commented on issue #16602: [MKLDNN]Fix reorder2default
URL: https://github.com/apache/incubator-mxnet/pull/16602#issuecomment-545768824
 
 
   > How about pushing this change directly to mkldnn-v1.0 branch?
   
   Let's let the master CI pass first, then I will duplicate this to the 1.0 branch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (0742a9b -> ca5a2a0)

2019-10-23 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 0742a9b  Large Vector tests for DGL Ops Part 2 (#16497)
 add ca5a2a0  [Numpy] Loading numpy-incompatible NDArray in 
numpy-compatible mode (#16597)

No new revisions were added by this update.

Summary of changes:
 include/mxnet/c_api.h  |  2 +-
 include/mxnet/imperative.h | 10 ++
 .../native/src/main/native/org_apache_mxnet_native_c_api.cc|  4 ++--
 src/c_api/c_api_ndarray.cc |  2 +-
 src/ndarray/ndarray.cc |  3 ++-
 5 files changed, 12 insertions(+), 9 deletions(-)



[GitHub] [incubator-mxnet] ZhennanQin opened a new pull request #16607: Suppress subgraph log in CI

2019-10-23 Thread GitBox
ZhennanQin opened a new pull request #16607: Suppress subgraph log in CI
URL: https://github.com/apache/incubator-mxnet/pull/16607
 
 
`src/executor/graph_executor.cc:2014: Subgraph backend MKLDNN is 
activated.` is flooding the CI log. Let's suppress it in CI.
   
   @pengzhao-intel @TaoLv 
   
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ZhennanQin opened a new pull request #16606: Fix dequantize memory corruption

2019-10-23 Thread GitBox
ZhennanQin opened a new pull request #16606: Fix dequantize memory corruption
URL: https://github.com/apache/incubator-mxnet/pull/16606
 
 
   dequantize only has one output, so the accesses to (*out_attrs)[1] and 
(*out_attrs)[2] are out of bounds and therefore illegal.
   @pengzhao-intel @TaoLv 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 merged pull request #16597: [Numpy] Loading numpy-incompatible NDArray in numpy-compatible mode

2019-10-23 Thread GitBox
haojin2 merged pull request #16597: [Numpy] Loading numpy-incompatible NDArray 
in numpy-compatible mode
URL: https://github.com/apache/incubator-mxnet/pull/16597
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
reminisce commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338397376
 
 

 ##
 File path: python/mxnet/numpy_dispatch_protocol.py
 ##
 @@ -119,6 +119,7 @@ def _run_with_array_ufunc_proto(*args, **kwargs):
 'var',
 'vdot',
 'vstack',
+# 'column_stack',
 
 Review comment:
  Please keep this and add some test cases (not necessarily from NumPy) in 
`test_numpy_interoperability.py`.
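
   For illustration, a minimal sketch of such a check once `column_stack` is registered for dispatch (a hedged reconstruction of the intent, not the test file's actual helpers; requires NumPy >= 1.17 for `__array_function__`):
   ```python
   import numpy as onp            # official NumPy
   from mxnet import np, npx

   npx.set_np()                   # enable NumPy-compatible semantics
   a = np.array([1., 2., 3.])
   b = np.array([2., 3., 4.])
   # Official NumPy called on mxnet ndarrays should dispatch back to mxnet.
   out = onp.column_stack((a, b))
   expected = onp.column_stack((a.asnumpy(), b.asnumpy()))
   assert onp.allclose(out.asnumpy(), expected)
   ```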


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
reminisce commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338397125
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -6419,3 +6419,39 @@ def einsum(*operands, **kwargs):
 ... np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize=True)
 """
 return _mx_nd_np.einsum(*operands, **kwargs)
+
+@set_module('mxnet.numpy')
+def column_stack(tup):
+    """ column_stack(*args, **kwargs)
+
+    Stack 1-D arrays as columns into a 2-D array.
+
+    Take a sequence of 1-D arrays and stack them as columns
+    to make a single 2-D array. 2-D arrays are stacked as-is,
+    just like with `hstack`.  1-D arrays are turned into 2-D columns
+    first.
+
+    Parameters
+    ----------
+    tup : sequence of 1-D or 2-D arrays.
+        Arrays to stack. All of them must have the same first dimension.
+
+    Returns
+    -------
+    stacked : 2-D array
+        The array formed by stacking the given arrays.
+
+    See Also
+    --------
+    stack, hstack, vstack, concatenate
+
+    Examples
+    --------
+    >>> a = np.array((1,2,3))
 
 Review comment:
   Did you run this using MXNet? I believe MXNet would give a slightly 
different output than the following.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
reminisce commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338396879
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -6419,3 +6419,39 @@ def einsum(*operands, **kwargs):
 ... np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize=True)
 """
 return _mx_nd_np.einsum(*operands, **kwargs)
+
+@set_module('mxnet.numpy')
+def column_stack(tup):
+    """ column_stack(*args, **kwargs)
+
+    Stack 1-D arrays as columns into a 2-D array.
+
+    Take a sequence of 1-D arrays and stack them as columns
+    to make a single 2-D array. 2-D arrays are stacked as-is,
+    just like with `hstack`.  1-D arrays are turned into 2-D columns
+    first.
+
+    Parameters
+    ----------
+    tup : sequence of 1-D or 2-D arrays.
+        Arrays to stack. All of them must have the same first dimension.
+
+    Returns
+    -------
+    stacked : 2-D array
+        The array formed by stacking the given arrays.
+
+    See Also
+    --------
+    stack, hstack, vstack, concatenate
+
+    Examples
+    --------
+    >>> a = np.array((1,2,3))
+    >>> b = np.array((2,3,4))
+    >>> np.column_stack((a,b))
+    array([[1, 2],
 
 Review comment:
   Notice the alignment
   ```python
   array([[1, 2],
  [2, 3],
  [3, 4]])
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
reminisce commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338396521
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -6419,3 +6419,39 @@ def einsum(*operands, **kwargs):
 ... np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize=True)
 """
 return _mx_nd_np.einsum(*operands, **kwargs)
+
+@set_module('mxnet.numpy')
+def column_stack(tup):
+""" column_stack(*args, **kwargs)
 
 Review comment:
   No need to define the signature in docstring.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TaoLv commented on issue #16602: [MKLDNN]Fix reorder2default

2019-10-23 Thread GitBox
TaoLv commented on issue #16602: [MKLDNN]Fix reorder2default
URL: https://github.com/apache/incubator-mxnet/pull/16602#issuecomment-545761914
 
 
   How about pushing this change directly to mkldnn-v1.0 branch?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #16598: second round of fixing broken links in multiple files

2019-10-23 Thread GitBox
aaronmarkham commented on issue #16598: second round of fixing broken links in 
multiple files
URL: https://github.com/apache/incubator-mxnet/pull/16598#issuecomment-545759333
 
 
   Restarted the gpu tests for a second time.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #16524: CI tests timing out

2019-10-23 Thread GitBox
aaronmarkham commented on issue #16524: CI tests timing out
URL: 
https://github.com/apache/incubator-mxnet/issues/16524#issuecomment-545759173
 
 
   Timeout on GPU: CMake TVM_OP OFF 
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-16598/2/pipeline/54
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] vexilligera commented on issue #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
vexilligera commented on issue #16594: [Numpy] implement np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#issuecomment-545758159
 
 
   @haojin2 @reminisce 
   Hi, all fixed. Please have a look.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei opened a new pull request #16605: Fix rnn dropout

2019-10-23 Thread GitBox
roywei opened a new pull request #16605: Fix rnn dropout
URL: https://github.com/apache/incubator-mxnet/pull/16605
 
 
   fix https://github.com/apache/incubator-mxnet/issues/16604
   
   Depends on https://github.com/apache/incubator-mxnet/pull/16532; I created a 
separate PR in case my fix on rnn has a problem. I don't want to impact the merge 
of https://github.com/apache/incubator-mxnet/pull/16532.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei commented on issue #16604: RNN op with dropout cannot use fixed seed on CPU

2019-10-23 Thread GitBox
roywei commented on issue #16604: RNN op with dropout cannot use fixed seed on 
CPU
URL: 
https://github.com/apache/incubator-mxnet/issues/16604#issuecomment-545747855
 
 
   This implementation of lstm, gru and rnn should respect the mxnet seed:
   
   
https://github.com/apache/incubator-mxnet/blob/master/src/operator/rnn_impl.h#L159


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
reminisce commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338382229
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -3542,6 +3542,62 @@ def test_np_true_divide():
 assert_almost_equal(out_mx.asnumpy(), out_np, rtol=1e-3, atol=1e-3, 
use_broadcast=False)
 
 
+@with_seed()
+@use_np
+def test_np_column_stack():
+    class TestColumnStack(HybridBlock):
+        def __init__(self):
+            super(TestColumnStack, self).__init__()
+
+        def hybrid_forward(self, F, a, *args):
+            return F.np.column_stack([a] + list(args))
+
+    def g(data):
+        return _np.ones_like(data)
+
+    configs = [
+        ((), (), ()),
+        ((2), (2), (2)),
+        ((1, 3), (1, 3), (1, 3)),
+        ((0), (0), (0)),
+        ((2, 2), (2, 1), (2, 3)),
+        ((4, 3), (4, 4), (4, 1)),
+        ((2, 2, 2), (2, 4, 2), (2, 2, 2)),
+        ((0, 1, 1), (0, 1, 1), (0, 1, 1)),
+        ((2, 1), (2, 2), (2, 2))
+    ]
+    types = ['float16', 'float32', 'float64', 'int8', 'int32', 'int64']
+    for config in configs:
 
 Review comment:
   Please use `itertools.product` to reduce the nested loops to one.
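
   For example, a minimal sketch of the suggestion (with a hypothetical subset of the configs; the types list is from the diff):
   ```python
   import itertools

   configs = [((2,), (2,), (2,)), ((1, 3), (1, 3), (1, 3))]  # hypothetical subset
   types = ['float16', 'float32', 'float64', 'int8', 'int32', 'int64']

   # One loop over every combination instead of nested for-loops:
   for config, dtype, hybridize in itertools.product(configs, types, [False, True]):
       print(config, dtype, hybridize)  # the original loop body goes here
   ```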


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei commented on a change in pull request #16532: fix dropout gpu seed

2019-10-23 Thread GitBox
roywei commented on a change in pull request #16532: fix dropout gpu seed
URL: https://github.com/apache/incubator-mxnet/pull/16532#discussion_r338382491
 
 

 ##
 File path: tests/nightly/test_dropout.py
 ##
 @@ -0,0 +1,49 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import sys
+import os
+curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'python', 'unittest'))
+from common import with_seed
+import unittest
+
+import mxnet as mx
+import numpy as np
+from mxnet.test_utils import assert_almost_equal
+from nose.tools import assert_raises
+
+@with_seed()
 
 Review comment:
   Added, but cpu does not work; gpu works after my PR. I think it's unrelated 
to my change, so I created a separate issue here: 
https://github.com/apache/incubator-mxnet/issues/16604
   
   The workaround is to manually append a dropout layer after the rnn on cpu.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
reminisce commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338382025
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -4761,3 +4760,39 @@ def einsum(*operands, **kwargs):
 subscripts = operands[0]
 operands = operands[1:]
 return _npi.einsum(*operands, subscripts=subscripts, out=out, optimize=int(optimize_arg))
+
+@set_module('mxnet.ndarray.numpy')
+def column_stack(tup):
+    """ column_stack(*args, **kwargs)
+
+    Stack 1-D arrays as columns into a 2-D array.
+
+    Take a sequence of 1-D arrays and stack them as columns
+    to make a single 2-D array. 2-D arrays are stacked as-is,
+    just like with `hstack`.  1-D arrays are turned into 2-D columns
+    first.
+
+    Parameters
+    ----------
+    tup : sequence of 1-D or 2-D arrays.
+        Arrays to stack. All of them must have the same first dimension.
+
+    Returns
+    -------
+    stacked : 2-D array
+        The array formed by stacking the given arrays.
+
+    See Also
+    --------
+    stack, hstack, vstack, concatenate
+
+    Examples
+    --------
+    >>> a = np.array((1,2,3))
+    >>> b = np.array((2,3,4))
+    >>> np.column_stack((a,b))
+    array([[1, 2],
+    [2, 3],
 
 Review comment:
   alignment


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei opened a new issue #16604: RNN op with dropout cannot use fixed seed on CPU

2019-10-23 Thread GitBox
roywei opened a new issue #16604: RNN op with dropout cannot use fixed seed on 
CPU
URL: https://github.com/apache/incubator-mxnet/issues/16604
 
 
   As follow up on https://github.com/apache/incubator-mxnet/pull/16532
   
   I found that the rnn operator's dropout result cannot be reproduced with a 
fixed seed on CPU.
   GPU (cudnn) is fine after PR 
https://github.com/apache/incubator-mxnet/pull/16532
   
   
   ```
   @with_seed()
   def test_rnn_with_seed():
       info = np.iinfo(np.int32)
       seed = np.random.randint(info.min, info.max)

       _test_rnn(seed, mx.cpu())
       _test_rnn(seed, mx.gpu())

   def _test_rnn(seed, ctx):
       data = mx.nd.ones((5, 3, 10), ctx=ctx)
       rnn = mx.gluon.rnn.RNN(100, 3, dropout=0.5)
       rnn.initialize(ctx=ctx)
       mx.random.seed(seed)
       with mx.autograd.record():
           result1 = rnn(data)

       mx.random.seed(seed)
       with mx.autograd.record():
           result2 = rnn(data)
       # dropout on gpu should return same result with fixed seed
       assert_almost_equal(result1.asnumpy(), result2.asnumpy())
   ```
   
   The current workaround is to NOT use dropout in rnn and to manually append a 
dropout layer after the rnn; the following works:
   ```
   def _test_rnn(seed, ctx):
       data = mx.nd.ones((5, 3, 10), ctx=ctx)
       rnn = mx.gluon.rnn.RNN(100, 3, dropout=0.)
       rnn.initialize(ctx=ctx)
       dropout = mx.gluon.nn.Dropout(0.5)
       with mx.autograd.record():
           result1 = rnn(data)
           mx.random.seed(seed)
           o1 = dropout(result1)

       with mx.autograd.record():
           result2 = rnn(data)
           mx.random.seed(seed)
           o2 = dropout(result2)
       # dropout on gpu should return same result with fixed seed
       assert_almost_equal(o1.asnumpy(), o2.asnumpy())
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16431: [RFC] MXNet Multithreaded Inference Interface

2019-10-23 Thread GitBox
anirudh2290 commented on issue #16431: [RFC] MXNet Multithreaded Inference 
Interface
URL: 
https://github.com/apache/incubator-mxnet/issues/16431#issuecomment-545741769
 
 
   @ptrendx I am trying to open a PR by Friday. On the status: the two 
prerequisite issues https://github.com/dmlc/dmlc-core/pull/573 and 
https://github.com/apache/incubator-mxnet/issues/16434 are now better 
understood and have been fixed or worked around. I have made the C API and 
backend changes and am currently still testing them.
   
   Because of time and resource constraints I won't be able to add the CPP 
frontend changes (which this PR mentioned as targeted for 1.6) to this 
proposal; it will cover only the C API changes, backend changes, and 
tests/verification.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16603: Significant slowdown in some DGL models

2019-10-23 Thread GitBox
anirudh2290 commented on issue #16603: Significant slowdown in some DGL models
URL: 
https://github.com/apache/incubator-mxnet/issues/16603#issuecomment-545740291
 
 
   Which commit on 1.6?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zheng-da commented on issue #16603: Significant slowdown in some DGL models

2019-10-23 Thread GitBox
zheng-da commented on issue #16603: Significant slowdown in some DGL models
URL: 
https://github.com/apache/incubator-mxnet/issues/16603#issuecomment-54571
 
 
   Here is the profiling result on MXNet 1.5:
   https://user-images.githubusercontent.com/70481/67448778-a81d0e00-f64a-11e9-8fe5-7ae37af3905f.png
   
   Here is the profiling result in MXNet 1.6:
   https://user-images.githubusercontent.com/70481/67448809-c2ef8280-f64a-11e9-9c78-c759028ec7ba.png
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zheng-da opened a new issue #16603: Significant slowdown in some DGL models

2019-10-23 Thread GitBox
zheng-da opened a new issue #16603: Significant slowdown in some DGL models
URL: https://github.com/apache/incubator-mxnet/issues/16603
 
 
   I recently compared the performance of DGL KGE models on MXNet 1.5 and the 
current master branch and noticed a significant slowdown. On MXNet 1.5 it takes 
12 seconds to run 1000 batches; now it takes 20 seconds. After some profiling, 
it seems there is no slowdown in the operators themselves.
   
   To reproduce the problem, please install DGL 0.4 and download the DGL KGE 
package by cloning [the DGL repo](https://github.com/dmlc/dgl); the package is 
under apps/kg. Then run the following command:
   ```bash
   DGLBACKEND=mxnet python3 train.py --model DistMult --dataset FB15k 
--batch_size 1024 \
   --neg_sample_size 256 --hidden_dim 2000 --gamma 500.0 --lr 0.1 
--max_step 10 \
   --batch_size_eval 16 --gpu 0 --valid --test -adv
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ZhennanQin opened a new pull request #16602: [MKLDNN]Fix reorder2default

2019-10-23 Thread GitBox
ZhennanQin opened a new pull request #16602: [MKLDNN]Fix reorder2default
URL: https://github.com/apache/incubator-mxnet/pull/16602
 
 
   Convolution and FC weights may get updated during inference, so we need to 
handle reorder2default within the engine to ensure there is no write access 
during the reorder.
   
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (c3395ca -> 0742a9b)

2019-10-23 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository.

anirudh2290 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from c3395ca  [Numpy] Support N_D(N>=3) batch_dot (#16586)
 add 0742a9b  Large Vector tests for DGL Ops Part 2 (#16497)

No new revisions were added by this update.

Summary of changes:
 tests/nightly/test_large_array.py  |  4 +-
 tests/nightly/test_large_vector.py | 85 --
 tests/python/unittest/test_operator.py |  1 +
 3 files changed, 85 insertions(+), 5 deletions(-)



[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
access2rohit commented on a change in pull request #16585: C Api for 
simplebind, fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338353987
 
 

 ##
 File path: tests/nightly/test_large_array.py
 ##
 @@ -1295,17 +1295,17 @@ def check_trunc():
 
 
 def create_input_for_trigonometric_ops(vals):
-# Creates large vector input of size(LARGE_X*10, SMALL_Y/10) from vals using tile operator
+# Creates large vector input of size(LARGE_X*10, SMALL_Y/10) from vals using broadcast_to operator
 
 Review comment:
   @anirudhacharya 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudh2290 merged pull request #16497: Large Vector tests for DGL Ops Part 2

2019-10-23 Thread GitBox
anirudh2290 merged pull request #16497: Large Vector tests for DGL Ops Part 2
URL: https://github.com/apache/incubator-mxnet/pull/16497
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] CoinCheung opened a new issue #16601: Where is this op.h file ?

2019-10-23 Thread GitBox
CoinCheung opened a new issue #16601: Where is this op.h file ?
URL: https://github.com/apache/incubator-mxnet/issues/16601
 
 
   ## Description
   I did not find the `op.h` file in the directory. 
   
   ### Error Message
   The make command gives the error: 
   
/root/build/incubator-mxnet/cpp-package/include/mxnet-cpp/optimizer.hpp:37:26: 
fatal error: mxnet-cpp/op.h: No such file or directory
   compilation terminated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] vexilligera commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
vexilligera commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338351249
 
 

 ##
 File path: python/mxnet/numpy_dispatch_protocol.py
 ##
 @@ -119,6 +119,7 @@ def _run_with_array_ufunc_proto(*args, **kwargs):
 'var',
 'vdot',
 'vstack',
+# 'column_stack',
 
 Review comment:
   This is to pass the numpy interoperability test temporarily: there's no test 
for it yet, and it would throw an error otherwise. I'll add a test later.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] CoinCheung commented on issue #16587: How could I set the location of openblas/lapack when I compile mxnet from source?

2019-10-23 Thread GitBox
CoinCheung commented on issue #16587: How could I set the location of 
openblas/lapack when I compile mxnet from source?
URL: 
https://github.com/apache/incubator-mxnet/issues/16587#issuecomment-545705188
 
 
   For lapack, I modified the path at this line, which does not seem right:
   
https://github.com/apache/incubator-mxnet/blob/c3395ca60b20f4388dd76746696497148b82fc80/make/config.mk#L134
   
   For openblas, I had to change the folder name from /opt/openblas to 
/opt/OpenBLAS to make it work. 
   
   Would you please show me how I could specify the locations explicitly?
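   
   For context, here is the kind of invocation I am hoping for (a sketch, assuming the `USE_BLAS`/`ADD_CFLAGS`/`ADD_LDFLAGS` hooks in make/config.mk apply; the prefix is my local install location):
   ```bash
   # Point the compiler and linker at the OpenBLAS prefix explicitly.
   make -j"$(nproc)" USE_BLAS=openblas \
       ADD_CFLAGS="-I/opt/openblas/include" \
       ADD_LDFLAGS="-L/opt/openblas/lib"
   ```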
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch mkldnn-v1.0 updated (ca240b2 -> 2210b21)

2019-10-23 Thread taolv
This is an automated email from the ASF dual-hosted git repository.

taolv pushed a change to branch mkldnn-v1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from ca240b2  Merge remote-tracking branch 'origin/master' into mkldnn-v1.0
 add 2210b21  [mkldnn-v1.0] Skip flaky test for unidirectional rnn_relu 
(#16545)

No new revisions were added by this update.

Summary of changes:
 tests/python/unittest/test_operator.py | 24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)



[GitHub] [incubator-mxnet] TaoLv merged pull request #16545: [mkldnn-v1.0] Skip flaky test for unidirectional rnn_relu

2019-10-23 Thread GitBox
TaoLv merged pull request #16545: [mkldnn-v1.0] Skip flaky test for 
unidirectional rnn_relu
URL: https://github.com/apache/incubator-mxnet/pull/16545
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (05a4c4f -> c3395ca)

2019-10-23 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 05a4c4f  Create SECURITY.md (#16573)
 add c3395ca  [Numpy] Support N_D(N>=3) batch_dot (#16586)

No new revisions were added by this update.

Summary of changes:
 src/operator/tensor/dot-inl.h  | 177 +++--
 src/operator/tensor/dot.cc |  84 +---
 src/operator/tensor/dot.cu |   3 -
 tests/python/unittest/test_numpy_op.py | 119 ++
 tests/python/unittest/test_operator.py |   4 +-
 5 files changed, 249 insertions(+), 138 deletions(-)



[GitHub] [incubator-mxnet] haojin2 merged pull request #16586: [Numpy] Support N_D(N>=3) batch_dot

2019-10-23 Thread GitBox
haojin2 merged pull request #16586: [Numpy] Support N_D(N>=3) batch_dot
URL: https://github.com/apache/incubator-mxnet/pull/16586
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on issue #16586: [Numpy] Support N_D(N>=3) batch_dot

2019-10-23 Thread GitBox
haojin2 commented on issue #16586: [Numpy] Support N_D(N>=3) batch_dot
URL: https://github.com/apache/incubator-mxnet/pull/16586#issuecomment-545695391
 
 
   @eric-haibin-lin Can you take another look?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Caenorst commented on issue #16408: Add MXNet Ops for fast multihead attention

2019-10-23 Thread GitBox
Caenorst commented on issue #16408: Add MXNet Ops for fast multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-545695259
 
 
   @eric-haibin-lin Only with `bwd_ignore_zero_init=True`; otherwise it works 
fine regardless. I left this flag in, hoping that `MXNET_EXEC_ENABLE_ADDTO` 
comes back soon.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-10-23 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 70eda22  Bump the publish timestamp.
70eda22 is described below

commit 70eda22099b67fd73e659017f68da0c39a31993e
Author: mxnet-ci 
AuthorDate: Thu Oct 24 00:40:51 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..8c2cc04
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Oct 24 00:40:51 UTC 2019



[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #16408: Add MXNet Ops for fast multihead attention

2019-10-23 Thread GitBox
eric-haibin-lin commented on issue #16408: Add MXNet Ops for fast multihead 
attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-545691096
 
 
   I can no longer find `MXNET_EXEC_ENABLE_ADDTO` in mxnet. Is this flag 
required? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TEChopra1000 commented on issue #14994: Flaky test: test_lstm_clip

2019-10-23 Thread GitBox
TEChopra1000 commented on issue #14994: Flaky test: test_lstm_clip
URL: 
https://github.com/apache/incubator-mxnet/issues/14994#issuecomment-545689222
 
 
   Failed again... 
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-16598/1/pipeline/


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch v1.6.x updated (df4125a -> 05a4c4f)

2019-10-23 Thread ptrendx
This is an automated email from the ASF dual-hosted git repository.

ptrendx pushed a change to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from df4125a  update NEWS.md and README.md (#16385)
 add 0bace55  fix choice signature
 add ec766d5  add raise test for shape
 add d5666ed  Round and sign straight-through-estimators C operators. 
(#16373)
 add 15ea40d  Add boolean ndarray (#15940)
 add 1d0d1e6  Faster Transpose 2D (#16104)
 add 9ff644b  Fix windows flakiness (#16415)
 add a8181dd  [MXNET-1430] julia: implement context.gpu_memory_info (#16324)
 add 9dc0ab8  global numpy shape flag (#16335)
 add cfe9e50  Skipping installing nightly test (#16418)
 add a2018ba  cuDNN non-persistant bidirectional RNN dgrad sync fix (#16391)
 add 56e1bef  Adds PyPI CD Pipeline (#16190)
 add 88521ff  upgrade the pytest version (#16429)
 add 6ce323f  [DOC] fix installation selector wrong history (#16381)
 add 1ab4c95  New ops for RCNN + old ops improvements for RCNN (#16215)
 add e484f72  Beta build (#16411)
 add 243ade9  [WIP] Improving Python Docs API (#16392)
 add cf61364  Revert "add mkl installation temp fix (#16304)" (#16369)
 add d8193c6  Update add_op_in_backend.md (#16403)
 add 7f5e687  numpy-compatible histogram (#16266)
 add ca30ba8  Pseudo 2D transpose kernel (#16229)
 add d2d76dc  increase docker cache timeout (#16430)
 add 4dee4ee  Fix mkldnn reshape (#16455)
 add 1e8cc90  [BUGFIX] Minor type issues in Squeeze (#16448)
 add 858a52e  Fix large array tests (#16328)
 add 6d6e46b  Comparison ops implemented using mshadow (#16414)
 add 1d4ede3  Add mask target generator operator for Mask-RCNN (#16268)
 add 8820220  Adds pip requirements file to nightly gpu ci image (#16472)
 add 1256976  Fix Nightly Tests for Binaries (#16451)
 add 812e504  fix autodoc for spurrious toggles (#16452)
 add 7ce  Fix dtype bug (#16467)
 add 9ab428e  [Doc] Update the download page with 1.5.1 release (#16442)
 add 6e0b1a5  [Numpy] Numpy compatible dstack (#15871)
 add ceebcaf  numpy eye op (#16132)
 add 8222979  Numpy compatible vsplit; minor changes to split (#15983)
 add 8562adc  add numpy op logspace (#15825)
 add 9681197  add numpy op bitwise_xor, hsplit, moveaxis, rot90 (#16257)
 add f9359c3  Fix flakey pylint CI failures (#16462)
 add 67e1e68  Aggregated zero grad (#16446)
 add b1932c0  Move MRCNNMaskTarget op to contrib (#16486)
 add 06438ab  Mxnet allclose (#14443)
 add 0c00a79  Fix optimizer bug for np attribute (#16494)
 add c2bbde7  Tests of NumPy interoperability (#16469)
 add bf57ff8  added more tests to verify support for large vector (#16477)
 add de524bb  Fixing broken links (#16500)
 add a4ea4a8  Load NDArray only to GPU if GPU is present (#16432)
 add d1200c9  add binary and docs build command options (#16514)
 add e4f8c50  [MKLDNN] Fix uint quantized fc when not fusing with 
requantize (#16523)
 add f6cfbdf  improve unary and binary operator handling and refactor tests 
(#16423)
 add 77e9898  Bug fix for the input of same axes of the swapaxes operator 
(#16513)
 add f2ed1d4  added support for large tensors for Dropout operator and 
tests to verify support for more operators (#16409)
 add 63fbfb1  [DOC] Fix numpy op doc  (#16504)
 add f01bcaa  [Numpy] More numpy dispatch tests (#16426)
 add 27f7082  Fix learning rate scheduler being unexpectedly overwritten by 
optimizer's default value (#16487)
 add 73bff7d  adding large tensor support for add_n and tests for more ops 
(#16476)
 add 4b8a95f  add option to remove indexes (#16525)
 add 32bb374  disable tests (#16536)
 add efa5369  adding large tensor support for pad operator (#15126)
 add 2d4c3a4  fix pylint in CI (#16540)
 add 27b3e52  image crop gpu (#16464)
 add a75ec06  [Numpy] einsum (#15911)
 add 9fecfbb  Add test pipeline for USE_TVM_OP=OFF on Unix (#16450)
 add b583059  Numpy dispatch test of .. (#16422)
 add 149e034  typo fix in r doc lstm tutorial (#16546)
 add fc81c64  Correct Google Analytics Tracker (#16490)
 add ffec31f  Aggregated adamw update (#16398)
 add 5b67a69  try to fix block (#16465)
 add c1d02ce  setup and concatenate, copy, expand_dims, expm1 (#16493)
 add cdfaf39  add sum for boolean type in mainline (#16436)
 add 1648f4c  [Numpy] SVD outputs tuple (#16530)
 add 5accae0  numpy op doc: max, min, prod (#16506)
 add b949716  add interface for rand
 add 217ae02  Fix numpy bugs (#16537)
 add 746cbc5  Add unit tests for TensorRT integration and fix some bugs 
(#15399)
 add 93ec1f2  [Doc] Use mirror link in the download page (#16501)
 add 06ce371  checking broken link fixes work (#16538)
 add 91bb398  [CD] Adds python docker pipeline (#16547)
 add 1fb6f00  Build dmlc-core with old thread_local impl

[GitHub] [incubator-mxnet] marcoabreu commented on issue #16599: Imagenet inference to nightly fix

2019-10-23 Thread GitBox
marcoabreu commented on issue #16599: Imagenet inference to nightly fix
URL: https://github.com/apache/incubator-mxnet/pull/16599#issuecomment-545685852
 
 
   That's output from the test. Can you make sure that this test case is not 
skipped?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch v1.6.x updated (df4125a -> 05a4c4f)

2019-10-23 Thread ptrendx
This is an automated email from the ASF dual-hosted git repository.

ptrendx pushed a change to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from df4125a  update NEWS.md and README.md (#16385)
 add 0bace55  fix choice signature
 add ec766d5  add raise test for shape
 add d5666ed  Round and sign straight-through-estimators C operators. 
(#16373)
 add 15ea40d  Add boolean ndarray (#15940)
 add 1d0d1e6  Faster Transpose 2D (#16104)
 add 9ff644b  Fix windows flakiness (#16415)
 add a8181dd  [MXNET-1430] julia: implement context.gpu_memory_info (#16324)
 add 9dc0ab8  global numpy shape flag (#16335)
 add cfe9e50  Skipping installing nightly test (#16418)
 add a2018ba  cuDNN non-persistant bidirectional RNN dgrad sync fix (#16391)
 add 56e1bef  Adds PyPI CD Pipeline (#16190)
 add 88521ff  upgrade the pytest version (#16429)
 add 6ce323f  [DOC] fix installation selector wrong history (#16381)
 add 1ab4c95  New ops for RCNN + old ops improvements for RCNN (#16215)
 add e484f72  Beta build (#16411)
 add 243ade9  [WIP] Improving Python Docs API (#16392)
 add cf61364  Revert "add mkl installation temp fix (#16304)" (#16369)
 add d8193c6  Update add_op_in_backend.md (#16403)
 add 7f5e687  numpy-compatible histogram (#16266)
 add ca30ba8  Pseudo 2D transpose kernel (#16229)
 add d2d76dc  increase docker cache timeout (#16430)
 add 4dee4ee  Fix mkldnn reshape (#16455)
 add 1e8cc90  [BUGFIX] Minor type issues in Squeeze (#16448)
 add 858a52e  Fix large array tests (#16328)
 add 6d6e46b  Comparison ops implemented using mshadow (#16414)
 add 1d4ede3  Add mask target generator operator for Mask-RCNN (#16268)
 add 8820220  Adds pip requirements file to nightly gpu ci image (#16472)
 add 1256976  Fix Nightly Tests for Binaries (#16451)
 add 812e504  fix autodoc for spurrious toggles (#16452)
 add 7ce  Fix dtype bug (#16467)
 add 9ab428e  [Doc] Update the download page with 1.5.1 release (#16442)
 add 6e0b1a5  [Numpy] Numpy compatible dstack (#15871)
 add ceebcaf  numpy eye op (#16132)
 add 8222979  Numpy compatible vsplit; minor changes to split (#15983)
 add 8562adc  add numpy op logspace (#15825)
 add 9681197  add numpy op bitwise_xor, hsplit, moveaxis, rot90 (#16257)
 add f9359c3  Fix flakey pylint CI failures (#16462)
 add 67e1e68  Aggregated zero grad (#16446)
 add b1932c0  Move MRCNNMaskTarget op to contrib (#16486)
 add 06438ab  Mxnet allclose (#14443)
 add 0c00a79  Fix optimizer bug for np attribute (#16494)
 add c2bbde7  Tests of NumPy interoperability (#16469)
 add bf57ff8  added more tests to verify support for large vector (#16477)
 add de524bb  Fixing broken links (#16500)
 add a4ea4a8  Load NDArray only to GPU if GPU is present (#16432)
 add d1200c9  add binary and docs build command options (#16514)
 add e4f8c50  [MKLDNN] Fix uint quantized fc when not fusing with 
requantize (#16523)
 add f6cfbdf  improve unary and binary operator handling and refactor tests 
(#16423)
 add 77e9898  Bug fix for the input of same axes of the swapaxes operator 
(#16513)
 add f2ed1d4  added support for large tensors for Dropout operator and 
tests to verify support for more operators (#16409)
 add 63fbfb1  [DOC] Fix numpy op doc  (#16504)
 add f01bcaa  [Numpy] More numpy dispatch tests (#16426)
 add 27f7082  Fix learning rate scheduler being unexpectedly overwritten by 
optimizer's default value (#16487)
 add 73bff7d  adding large tensor support for add_n and tests for more ops 
(#16476)
 add 4b8a95f  add option to remove indexes (#16525)
 add 32bb374  disable tests (#16536)
 add efa5369  adding large tensor support for pad operator (#15126)
 add 2d4c3a4  fix pylint in CI (#16540)
 add 27b3e52  image crop gpu (#16464)
 add a75ec06  [Numpy] einsum (#15911)
 add 9fecfbb  Add test pipeline for USE_TVM_OP=OFF on Unix (#16450)
 add b583059  Numpy dispatch test of .. (#16422)
 add 149e034  typo fix in r doc lstm tutorial (#16546)
 add fc81c64  Correct Google Analytics Tracker (#16490)
 add ffec31f  Aggregated adamw update (#16398)
 add 5b67a69  try to fix block (#16465)
 add c1d02ce  setup and concatenate, copy, expand_dims, expm1 (#16493)
 add cdfaf39  add sum for boolean type in mainline (#16436)
 add 1648f4c  [Numpy] SVD outputs tuple (#16530)
 add 5accae0  numpy op doc: max, min, prod (#16506)
 add b949716  add interface for rand
 add 217ae02  Fix numpy bugs (#16537)
 add 746cbc5  Add unit tests for TensorRT integration and fix some bugs 
(#15399)
 add 93ec1f2  [Doc] Use mirror link in the download page (#16501)
 add 06ce371  checking broken link fixes work (#16538)
 add 91bb398  [CD] Adds python docker pipeline (#16547)
 add 1fb6f00  Build dmlc-core with old thread_local impl

[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #16532: fix dropout gpu seed

2019-10-23 Thread GitBox
eric-haibin-lin commented on a change in pull request #16532: fix dropout gpu 
seed
URL: https://github.com/apache/incubator-mxnet/pull/16532#discussion_r338328289
 
 

 ##
 File path: tests/nightly/test_dropout.py
 ##
 @@ -0,0 +1,49 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import sys
+import os
+curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'python', 'unittest'))
+from common import with_seed
+import unittest
+
+import mxnet as mx
+import numpy as np
+from mxnet.test_utils import assert_almost_equal
+from nose.tools import assert_raises
+
+@with_seed()
 
 Review comment:
   would you mind adding a unit test for the RNN op, too? Thx! 
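
   As a rough illustration of the kind of check being requested, a seeded-reproducibility test for an RNN with dropout could look like the sketch below (a hypothetical sketch, not the PR's actual test; the two-layer LSTM, shapes, and seed are assumptions):

   ```python
   import mxnet as mx
   from mxnet.test_utils import assert_almost_equal

   def check_rnn_dropout_reproducibility(ctx=mx.cpu()):
       # Dropout in gluon's LSTM is applied between stacked layers,
       # so num_layers=2 is needed for the dropout path to run at all.
       lstm = mx.gluon.rnn.LSTM(hidden_size=8, num_layers=2, dropout=0.5)
       lstm.initialize(ctx=ctx)
       data = mx.nd.random.uniform(shape=(5, 4, 16), ctx=ctx)  # (seq, batch, feature)
       with mx.autograd.record():  # dropout is active only in training mode
           mx.random.seed(42, ctx=ctx)
           out1 = lstm(data).asnumpy()
           mx.random.seed(42, ctx=ctx)
           out2 = lstm(data).asnumpy()
       # With the seed fix, re-seeding must reproduce the same dropout mask.
       assert_almost_equal(out1, out2)
   ```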


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #16532: fix dropout gpu seed

2019-10-23 Thread GitBox
eric-haibin-lin commented on a change in pull request #16532: fix dropout gpu 
seed
URL: https://github.com/apache/incubator-mxnet/pull/16532#discussion_r338327895
 
 

 ##
 File path: src/operator/nn/dropout-inl.h
 ##
 @@ -253,7 +253,19 @@ class DropoutOp {
                            const TBlob &mask,
                            const TBlob &out) {
   Stream<xpu> *s = ctx.get_stream<xpu>();
 -
 +  Random<xpu, unsigned> *prnd = ctx.requested[1].get_random<xpu, unsigned>(s);
 +  Tensor<xpu, 1, char> workspace =
 +    ctx.requested[2].get_space_typed<xpu, 1, char>(Shape1(1 * sizeof(unsigned)), s);
 +  // slice workspace
 +  char *workspace_ptr = workspace.dptr_;
 +  Tensor<xpu, 1, unsigned> random_number =
 +    Tensor<xpu, 1, unsigned>(reinterpret_cast<unsigned*>(workspace_ptr), Shape1(1), s);
 +  prnd->GetRandInt(random_number);
 +  // copy generated random int to cpu
 +  unsigned data = 0;
 +  CUDA_CALL(cudaMemcpy(&data, &random_number[0], sizeof(unsigned), cudaMemcpyDeviceToHost));
 +  uint64_t seed_ = 17 + static_cast<uint64_t>(data) % 4096;
 
 Review comment:
   I already created one https://github.com/apache/incubator-mxnet/issues/16583 
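
   Worth noting: the derived seed above can take only 4096 distinct values, since the GPU-generated integer is reduced modulo 4096. A quick illustration (an editorial example, not part of the PR):

   ```python
   # 17 + data % 4096 always lands in [17, 4112], i.e. at most 4096 seeds.
   seeds = {17 + data % 4096 for data in range(100000)}
   print(len(seeds), min(seeds), max(seeds))  # -> 4096 17 4112
   ```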


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
ChaiBapchya commented on issue #16585: C Api for simplebind, fix comment for 
trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#issuecomment-545681670
 
 
   Also addressed the suggestions by @anirudhacharya in #16416


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #16416: Dgl ops 2

2019-10-23 Thread GitBox
ChaiBapchya commented on a change in pull request #16416: Dgl ops 2
URL: https://github.com/apache/incubator-mxnet/pull/16416#discussion_r338327543
 
 

 ##
 File path: tests/nightly/test_large_array.py
 ##
 @@ -1351,6 +1407,48 @@ def check_tan():
 expected_output = [-.577, -1, 0, 1, .577]
 assert_correctness_of_trigonometric_ops(y, expected_output)
 
+def check_arcsinh():
+    x = create_input_for_trigonometric_ops([-np.pi/2, -np.pi/4, 0, np.pi/4, np.pi/2])
+    y = nd.arcsinh(x)
+    # expected output for indices=(0, 1, -3, -2, -1) after applying arcsinh()
+    expected_output = [np.arcsinh(-np.pi/2), np.arcsinh(-np.pi/4), 0, np.arcsinh(np.pi/4), np.arcsinh(np.pi/2)]
+    assert_correctness_of_trigonometric_ops(y, expected_output)
+
+def check_arccosh():
+    x = create_input_for_trigonometric_ops([1, np.pi/2, 3*np.pi/4, np.pi])
+    y = nd.arccosh(x)
+    # expected output for indices=(0, 1, -3, -2, -1) after applying arccosh()
+    expected_output = [0, np.arccosh(np.pi/2), np.arccosh(3*np.pi/4), np.arccosh(np.pi)]
+    assert_correctness_of_trigonometric_ops(y, expected_output)
+
+def check_arctanh():
+    x = create_input_for_trigonometric_ops([-1/4, -1/2, 0, 1/4, 1/2])
+    y = nd.arctanh(x)
+    # expected output for indices=(0, 1, -3, -2, -1) after applying arctanh()
+    expected_output = [np.arctanh(-1/4), np.arctanh(-1/2), 0, np.arctanh(1/4), np.arctanh(1/2)]
+    assert_correctness_of_trigonometric_ops(y, expected_output)
+
+def check_sinh():
+    x = create_input_for_trigonometric_ops([-np.pi/2, -np.pi/4, 0, np.pi/4, np.pi/2])
+    y = nd.sinh(x)
+    # expected output for indices=(0, 1, -3, -2, -1) after applying sinh()
+    expected_output = [np.sinh(-np.pi/2), np.sinh(-np.pi/4), 0, np.sinh(np.pi/4), np.sinh(np.pi/2)]
+    assert_correctness_of_trigonometric_ops(y, expected_output)
+
+def check_cosh():
+    x = create_input_for_trigonometric_ops([0, 1, np.pi/2, 3*np.pi/4, np.pi])
+    y = nd.cosh(x)
+    # expected output for indices=(0, 1, -3, -2, -1) after applying cosh()
+    expected_output = [1, np.cosh(1), np.cosh(np.pi/2), np.cosh(3*np.pi/4), np.cosh(np.pi)]
+    assert_correctness_of_trigonometric_ops(y, expected_output)
+
+def check_tanh():
+    x = create_input_for_trigonometric_ops([-1/4, -1/2, 0, 1/4, 1/2])
+    y = nd.tanh(x)
+    # expected output for indices=(0, 1, -3, -2, -1) after applying tanh()
+    expected_output = [np.tanh(-1/4), np.tanh(-1/2), 0, np.tanh(1/4), np.tanh(1/2)]
+    assert_correctness_of_trigonometric_ops(y, expected_output)
 
 Review comment:
   Addressed it here #16585 
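
   For context, the helpers used in this hunk are defined elsewhere in tests/nightly/test_large_array.py; a simplified sketch of what they do is below. The LARGE_SIZE constant and the exact placement logic are assumptions for illustration, not the file's real definitions.

   ```python
   import mxnet.ndarray as nd

   LARGE_SIZE = 2**32 + 2  # assumed: just past the int32 indexing limit

   def create_input_for_trigonometric_ops(vals):
       # Embed a handful of probe values into a large vector so the op
       # exercises the int64 indexing path; the rest stays zero.
       x = nd.zeros(LARGE_SIZE)
       for i, v in zip((0, 1, -3, -2, -1), vals):
           x[i % LARGE_SIZE] = v
       return x

   def assert_correctness_of_trigonometric_ops(y, expected_output, atol=1e-3):
       # Spot-check only the probe positions instead of copying the
       # whole large result back to the host.
       for i, expected in zip((0, 1, -3, -2, -1), expected_output):
           assert abs(y[i % LARGE_SIZE].asscalar() - expected) <= atol
   ```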


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on issue #16431: [RFC] MXNet Multithreaded Inference Interface

2019-10-23 Thread GitBox
ptrendx commented on issue #16431: [RFC] MXNet Multithreaded Inference Interface
URL: 
https://github.com/apache/incubator-mxnet/issues/16431#issuecomment-545681158
 
 
   Hi @anirudh2290, what is the status of this proposal? When do you think 
changes will be ready?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
ChaiBapchya commented on a change in pull request #16585: C Api for simplebind, 
fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338327118
 
 

 ##
 File path: src/c_api/c_api_executor.cc
 ##
 @@ -849,6 +816,152 @@ int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
   API_END();
 }
 
+}  // namespace mxnet
+
+
+/*!
+ * \brief
+ * \param symbol_handle symbol handle
+ * \param dev_type default device type
+ * \param dev_id default device id
+ * \param num_g2c_keys number of group2ctx keys
+ * \param g2c_keys key list of group2ctx
+ * \param g2c_dev_types device type list of group2ctx
+ * \param g2c_dev_ids id list of group2ctx
+ * \param provided_grad_req_list_len grad_req length provided by users in 
front-end
+ * \param provided_grad_req_names grad_req names provided by users in front-end
+ * \param provided_grad_req_types req types provided by users in front-end
+ * \param num_provided_arg_shapes number of user provided in_arg and aux_state 
shapes
+ * \param provided_arg_shape_names name list of provided shapes
+ * \param provided_arg_shape_data provided shape data
+ * \param provided_arg_shape_idx provided shape data index
+ * \param num_provided_arg_dtypes number of user provided in_arg and aux_state dtypes
+ * \param provided_arg_dtype_names argument name list of provided dtypes
+ * \param provided_arg_dtypes data of provided dtypes
+ * \param num_provided_arg_stypes number of user provided in_arg and aux_state storage types
+ * \param provided_arg_stype_names argument name list of provided storage types
+ * \param provided_arg_stypes data of provided storage types
+ * \param num_shared_arg_names number of parameter names passed from 
_bind_ith_exec
+ * \param shared_arg_name_list parameter name list passed from _bind_ith_exec
+ * \param shared_buffer_len number of shared data arrays passed from 
_bind_ith_exec
+ * \param shared_buffer_name_list shared data array names passed from 
_bind_ith_exec
+ * \param shared_buffer_handle_list shared data array handles passed from 
_bind_ith_exec
+ * \param updated_shared_buffer_name_list updated shared data array names 
after binding
+ * \param updated_shared_buffer_handle_list updated shared data arrays after 
binding
+ * \param num_in_args number of input arguments of this sym
+ * \param in_args list_arguments associated with the current executor
+ * \param arg_grads list of gradients of in_args associated with the current 
executor
+ * \param num_aux_states number of aux states of this sym
+ * \param aux_states list_auxiliary_states associated with the current executor
+ * \param shared_exec_handle shared executor handle passed from _bind_ith_exec
+ * \param out the handle of the executor to be created
+ */
+int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
+   int dev_type,
+   int dev_id,
+   const uint32_t num_g2c_keys,
+   const char** g2c_keys,
+   const int* g2c_dev_types,
+   const int* g2c_dev_ids,
+   const uint32_t provided_grad_req_list_len,
+   const char** provided_grad_req_names,
+   const char** provided_grad_req_types,
+   const uint32_t num_provided_arg_shapes,
+   const char** provided_arg_shape_names,
+   const int* provided_arg_shape_data,
+   const uint32_t* provided_arg_shape_idx,
+   const uint32_t num_provided_arg_dtypes,
+   const char** provided_arg_dtype_names,
+   const int* provided_arg_dtypes,
+   const uint32_t num_provided_arg_stypes,
+   const char** provided_arg_stype_names,
+   const int* provided_arg_stypes,
+   const uint32_t num_shared_arg_names,
+   const char** shared_arg_name_list,
+   int* shared_buffer_len,
+   const char** shared_buffer_name_list,
+   NDArrayHandle* shared_buffer_handle_list,
+   const char*** updated_shared_buffer_name_list,
+   NDArrayHandle** updated_shared_buffer_handle_list,
+   uint32_t* num_in_args,
+   NDArrayHandle** in_args,
+   NDArrayHandle** arg_grads,
+   uint32_t* num_aux_states,
+   NDArrayHandle** aux_states,
+   ExecutorHandle shared_exec_handle,
+   ExecutorHandle* out) {
+  return mxnet::SimpleBindExMaster(symbol_handle,
+dev_type, dev_id,
+num_g2c_keys, g
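
   For orientation, this C API is what the Python frontend's simple_bind reaches through ctypes; a minimal sketch of that frontend path, with an arbitrary toy symbol (the network, shapes, and names here are illustrative assumptions):

   ```python
   import mxnet as mx

   # Build a tiny symbol and let simple_bind infer shapes/types and allocate
   # all arrays; under the hood this call lands in MXExecutorSimpleBindEx.
   data = mx.sym.Variable('data')
   net = mx.sym.FullyConnected(data, num_hidden=4, name='fc')
   exe = net.simple_bind(ctx=mx.cpu(), data=(2, 8), grad_req='write')
   exe.forward(is_train=True, data=mx.nd.ones((2, 8)))
   print(exe.outputs[0].shape)  # (2, 4)
   ```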

[GitHub] [incubator-mxnet] haojin2 opened a new issue #16600: Clojure test failure: Could not transfer artifact com.fasterxml.jackson.core

2019-10-23 Thread GitBox
haojin2 opened a new issue #16600: Clojure test failure: Could not transfer 
artifact com.fasterxml.jackson.core
URL: https://github.com/apache/incubator-mxnet/issues/16600
 
 
   Encountered at http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/PR-16597/4/pipeline/ :
   Could not transfer artifact com.fasterxml.jackson.core:jackson-core:jar:2.9.0 from/to central (https://repo1.maven.org/maven2/): GET request of: com/fasterxml/jackson/core/jackson-core/2.9.0/jackson-core-2.9.0.jar from central failed
   Could not find artifact com.fasterxml.jackson.core:jackson-core:jar:2.9.0 in clojars (https://repo.clojars.org/)
   This could be due to a typo in :dependencies, file system permissions, or network issues.
   If you are behind a proxy, try setting the 'http_proxy' environment variable.
   @gigasquid Is this due to flakiness in the package-serving infrastructure?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16599: Imagenet inference to nightly fix

2019-10-23 Thread GitBox
ChaiBapchya commented on issue #16599: Imagenet inference to nightly fix
URL: https://github.com/apache/incubator-mxnet/pull/16599#issuecomment-545680518
 
 
   Oh, I didn't get you. Should I comment it out? Remove it? Handle it?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (e22e93f -> 05a4c4f)

2019-10-23 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from e22e93f  fix missing docs due to git add issues (#16496)
 add 05a4c4f  Create SECURITY.md (#16573)

No new revisions were added by this update.

Summary of changes:
 tools/caffe_converter/README.md => SECURITY.md | 15 +--
 1 file changed, 5 insertions(+), 10 deletions(-)
 copy tools/caffe_converter/README.md => SECURITY.md (65%)



[GitHub] [incubator-mxnet] marcoabreu commented on issue #16599: Imagenet inference to nightly fix

2019-10-23 Thread GitBox
marcoabreu commented on issue #16599: Imagenet inference to nightly fix
URL: https://github.com/apache/incubator-mxnet/pull/16599#issuecomment-545676057
 
 
   "Skipped INT8 test because mkldnn was not found which is required for 
running inference with quantized models."
   
   Can you take care of that as well please?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] szha merged pull request #16573: Create SECURITY.md

2019-10-23 Thread GitBox
szha merged pull request #16573: Create SECURITY.md
URL: https://github.com/apache/incubator-mxnet/pull/16573
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16586: [Numpy] Support N_D(N>=3) batch_dot

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16586: [Numpy] Support N_D(N>=3) 
batch_dot
URL: https://github.com/apache/incubator-mxnet/pull/16586#discussion_r338321472
 
 

 ##
 File path: src/operator/tensor/dot.cc
 ##
 @@ -138,21 +138,73 @@ which is computed by::
 return std::vector<ResourceRequest>{ResourceRequest::kTempSpace};
   })
 .set_attr<FCompute>("FCompute<cpu>", BatchDotForward_<cpu>)
-.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseIn{"_backward_batch_dot"})
+.set_attr<nnvm::FGradient>("FGradient",
+  [](const nnvm::NodePtr& n,
+     const std::vector<nnvm::NodeEntry>& ograds) {
+    const DotParam& param = nnvm::get<DotParam>(n->attrs.parsed);
+  nnvm::NodePtr lhs_grad;
+  nnvm::NodePtr rhs_grad;
+  std::string lhs_gnode_name = n->attrs.name + "_backward_lhs";
+  std::string rhs_gnode_name = n->attrs.name + "_backward_rhs";
+  if (param.transpose_a && param.transpose_b) {
+// Gradient of z = dot(x.T, y.T)
+// dx = dot(dz, y).T = dot(y.T, dz.T)
+// dy = dot(x, dz).T = dot(dz.T, x.T)
+lhs_grad = MakeNode("batch_dot", lhs_gnode_name,
+{n->inputs[1], ograds[0]}, &(n->attrs.dict), &n);
+rhs_grad = MakeNode("batch_dot", rhs_gnode_name,
+{ograds[0], n->inputs[0]}, &(n->attrs.dict), &n);
+  } else if (!param.transpose_a && param.transpose_b) {
+// Gradient of z = dot(x, y.T)
+// dx = dot(dz, y)
+// dy = dot(x.T, dz).T = dot(dz.T, x)
+auto lhs_attrs_dict = n->attrs.dict;
+auto rhs_attrs_dict = n->attrs.dict;
+lhs_attrs_dict["transpose_a"] = "false";
+lhs_attrs_dict["transpose_b"] = "false";
+rhs_attrs_dict["transpose_a"] = "true";
+rhs_attrs_dict["transpose_b"] = "false";
+lhs_grad = MakeNode("batch_dot", lhs_gnode_name,
+{ograds[0], n->inputs[1]}, &lhs_attrs_dict, &n);
+rhs_grad = MakeNode("batch_dot", rhs_gnode_name,
+{ograds[0], n->inputs[0]}, &rhs_attrs_dict, &n);
+  } else if (param.transpose_a && !param.transpose_b) {
+// Gradient of z = dot(x.T, y)
+// dx = dot(dz, y.T).T = dot(y, dz.T)
+// dy = dot(x, dz)
+auto lhs_attrs_dict = n->attrs.dict;
+auto rhs_attrs_dict = n->attrs.dict;
+lhs_attrs_dict["transpose_a"] = "false";
+lhs_attrs_dict["transpose_b"] = "true";
+rhs_attrs_dict["transpose_a"] = "false";
+rhs_attrs_dict["transpose_b"] = "false";
+lhs_grad = MakeNode("batch_dot", lhs_gnode_name,
+{n->inputs[1], ograds[0]}, &lhs_attrs_dict, &n);
+rhs_grad = MakeNode("batch_dot", rhs_gnode_name,
+{n->inputs[0], ograds[0]}, &rhs_attrs_dict, &n);
+  } else {
+// Gradient of z = dot(x, y)
+// dx = dot(dz, y.T)
+// dy = dot(x.T, dz)
+auto lhs_attrs_dict = n->attrs.dict;
+auto rhs_attrs_dict = n->attrs.dict;
+lhs_attrs_dict["transpose_a"] = "false";
+lhs_attrs_dict["transpose_b"] = "true";
+rhs_attrs_dict["transpose_a"] = "true";
+rhs_attrs_dict["transpose_b"] = "false";
+lhs_grad = MakeNode("batch_dot", lhs_gnode_name,
+{ograds[0], n->inputs[1]}, &lhs_attrs_dict, &n);
+rhs_grad = MakeNode("batch_dot", rhs_gnode_name,
+{n->inputs[0], ograds[0]}, &rhs_attrs_dict, &n);
+  }
+  std::vector<nnvm::NodeEntry> ret;
+  ret.emplace_back(nnvm::NodeEntry{lhs_grad, 0, 0});
+  ret.emplace_back(nnvm::NodeEntry{rhs_grad, 0, 0});
+  return ret;
+})
 .add_argument("lhs", "NDArray-or-Symbol", "The first input")
 .add_argument("rhs", "NDArray-or-Symbol", "The second input")
 .add_arguments(DotParam::__FIELDS__());
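
   The four branches above instantiate the standard matrix-calculus identities for batched products. As a worked statement of the fully transposed case (with the upstream gradient written as a bar):

   ```latex
   % Z = X^{\top} Y^{\top}, i.e. Z_{ij} = \sum_k X_{ki} Y_{jk}.
   % Differentiating the loss through Z with upstream gradient \bar{Z}:
   \bar{X} = (\bar{Z}\,Y)^{\top} = Y^{\top}\bar{Z}^{\top}, \qquad
   \bar{Y} = (X\,\bar{Z})^{\top} = \bar{Z}^{\top}X^{\top}
   % Both right-hand sides are again batch_dot calls with
   % transpose_a = transpose_b = true, which is why this branch of the
   % code reuses the node's attribute dict unchanged.
   ```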
 
 
 Review comment:
   @eric-haibin-lin Here is the benchmark script for getting backward performance: https://gist.github.com/haojin2/c1a2bd1373530f4686bdefd2eafbee84
   Results:
   lhs: (32, 128, 768) rhs: (32, 128, 768) transpose_b: True  0.212037ms -> 0.213933ms
   lhs: (32, 1, 768)   rhs: (32, 128, 768) transpose_b: True  0.119977ms -> 0.124208ms
   There's no obvious regression in performance.
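
   For readers without access to the gist, a minimal sketch of how such timings could be collected (a hypothetical stand-in, not the linked script; shapes and repeat count are assumptions):

   ```python
   import time
   import mxnet as mx

   def time_batch_dot(lhs_shape, rhs_shape, transpose_b, repeat=100):
       lhs = mx.nd.random.uniform(shape=lhs_shape)
       rhs = mx.nd.random.uniform(shape=rhs_shape)
       lhs.attach_grad()
       rhs.attach_grad()
       mx.nd.waitall()  # flush pending async work before timing
       start = time.time()
       for _ in range(repeat):
           with mx.autograd.record():
               out = mx.nd.batch_dot(lhs, rhs, transpose_b=transpose_b)
           out.backward()  # forward + backward measured together as a proxy
       mx.nd.waitall()
       return (time.time() - start) / repeat * 1000.0  # ms per iteration

   print(time_batch_dot((32, 128, 768), (32, 128, 768), transpose_b=True))
   ```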


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16599: Imagenet inference to nightly fix

2019-10-23 Thread GitBox
ChaiBapchya commented on issue #16599: Imagenet inference to nightly fix
URL: https://github.com/apache/incubator-mxnet/pull/16599#issuecomment-545664428
 
 
   Verified this works correctly here - 
http://jenkins.mxnet-ci-dev.amazon-ml.com/blue/organizations/jenkins/NightlyTestsForBinaries/detail/shared_library_nightly_fix/3/pipeline


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya opened a new pull request #16599: Shared library nightly fix

2019-10-23 Thread GitBox
ChaiBapchya opened a new pull request #16599: Shared library nightly fix
URL: https://github.com/apache/incubator-mxnet/pull/16599
 
 
   ## Description ##
   PR #16577 was introduced to move imagenet inference to nightly.
   However, I made that PR prematurely (without testing it completely).
   
   As it so happened (https://github.com/apache/incubator-mxnet/pull/16577/files#r337663583),
   Jenkins CI (NightlyTestsForBinaries) failed because of an issue in
   http://jenkins.mxnet-ci-dev.amazon-ml.com/blue/organizations/jenkins/NightlyTestsForBinaries/detail/move_imagenet_inference_nightly/20/pipeline/72
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
anirudh2290 commented on a change in pull request #16585: C Api for simplebind, 
fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338308956
 
 

 ##
 File path: src/c_api/c_api_executor.cc
 ##
 @@ -849,6 +816,152 @@ int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
   API_END();
 }
 
+}  // namespace mxnet
+
+
+/*!
+ * \brief
+ * \param symbol_handle symbol handle
+ * \param dev_type default device type
+ * \param dev_id default device id
+ * \param num_g2c_keys number of group2ctx keys
+ * \param g2c_keys key list of group2ctx
+ * \param g2c_dev_types device type list of group2ctx
+ * \param g2c_dev_ids id list of group2ctx
+ * \param provided_grad_req_list_len grad_req length provided by users in 
front-end
+ * \param provided_grad_req_names grad_req names provided by users in front-end
+ * \param provided_grad_req_types req types provided by users in front-end
+ * \param num_provided_arg_shapes number of user provided in_arg and aux_state 
shapes
+ * \param provided_arg_shape_names name list of provided shapes
+ * \param provided_arg_shape_data provided shape data
+ * \param provided_arg_shape_idx provided shape data index
+ * \param num_provided_arg_dtypes number of user provided in_arg and aux_state dtypes
+ * \param provided_arg_dtype_names argument name list of provided dtypes
+ * \param provided_arg_dtypes data of provided dtypes
+ * \param num_provided_arg_stypes number of user provided in_arg and aux_state storage types
+ * \param provided_arg_stype_names argument name list of provided storage types
+ * \param provided_arg_stypes data of provided storage types
+ * \param num_shared_arg_names number of parameter names passed from 
_bind_ith_exec
+ * \param shared_arg_name_list parameter name list passed from _bind_ith_exec
+ * \param shared_buffer_len number of shared data arrays passed from 
_bind_ith_exec
+ * \param shared_buffer_name_list shared data array names passed from 
_bind_ith_exec
+ * \param shared_buffer_handle_list shared data array handles passed from 
_bind_ith_exec
+ * \param updated_shared_buffer_name_list updated shared data array names 
after binding
+ * \param updated_shared_buffer_handle_list updated shared data arrays after 
binding
+ * \param num_in_args number of input arguments of this sym
+ * \param in_args list_arguments associated with the current executor
+ * \param arg_grads list of gradients of in_args associated with the current 
executor
+ * \param num_aux_states number of aux states of this sym
+ * \param aux_states list_auxiliary_states associated with the current executor
+ * \param shared_exec_handle shared executor handle passed from _bind_ith_exec
+ * \param out the handle of the executor to be created
+ */
+int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
+   int dev_type,
+   int dev_id,
+   const uint32_t num_g2c_keys,
+   const char** g2c_keys,
+   const int* g2c_dev_types,
+   const int* g2c_dev_ids,
+   const uint32_t provided_grad_req_list_len,
+   const char** provided_grad_req_names,
+   const char** provided_grad_req_types,
+   const uint32_t num_provided_arg_shapes,
+   const char** provided_arg_shape_names,
+   const int* provided_arg_shape_data,
+   const uint32_t* provided_arg_shape_idx,
+   const uint32_t num_provided_arg_dtypes,
+   const char** provided_arg_dtype_names,
+   const int* provided_arg_dtypes,
+   const uint32_t num_provided_arg_stypes,
+   const char** provided_arg_stype_names,
+   const int* provided_arg_stypes,
+   const uint32_t num_shared_arg_names,
+   const char** shared_arg_name_list,
+   int* shared_buffer_len,
+   const char** shared_buffer_name_list,
+   NDArrayHandle* shared_buffer_handle_list,
+   const char*** updated_shared_buffer_name_list,
+   NDArrayHandle** updated_shared_buffer_handle_list,
+   uint32_t* num_in_args,
+   NDArrayHandle** in_args,
+   NDArrayHandle** arg_grads,
+   uint32_t* num_aux_states,
+   NDArrayHandle** aux_states,
+   ExecutorHandle shared_exec_handle,
+   ExecutorHandle* out) {
+  return mxnet::SimpleBindExMaster(symbol_handle,
+dev_type, dev_id,
+num_g2c_keys, g

[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
ChaiBapchya commented on issue #16585: C Api for simplebind, fix comment for 
trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#issuecomment-545661044
 
 
   ```
   nosetests tests/nightly/test_large_vector.py:test_regression
   .
   --
   Ran 1 test in 745.939s
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TEChopra1000 opened a new pull request #16598: second round of fixing broken links in multiple files

2019-10-23 Thread GitBox
TEChopra1000 opened a new pull request #16598: second round of fixing broken 
links in multiple files
URL: https://github.com/apache/incubator-mxnet/pull/16598
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
ChaiBapchya commented on a change in pull request #16585: C Api for simplebind, 
fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338301882
 
 

 ##
 File path: src/c_api/c_api_executor.cc
 ##
 @@ -849,6 +816,152 @@ int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
   API_END();
 }
 
+}  // namespace mxnet
+
+
+/*!
+ * \brief
+ * \param symbol_handle symbol handle
+ * \param dev_type default device type
+ * \param dev_id default device id
+ * \param num_g2c_keys number of group2ctx keys
+ * \param g2c_keys key list of group2ctx
+ * \param g2c_dev_types device type list of group2ctx
+ * \param g2c_dev_ids id list of group2ctx
+ * \param provided_grad_req_list_len grad_req length provided by users in 
front-end
+ * \param provided_grad_req_names grad_req names provided by users in front-end
+ * \param provided_grad_req_types req types provided by users in front-end
+ * \param num_provided_arg_shapes number of user provided in_arg and aux_state 
shapes
+ * \param provided_arg_shape_names name list of provided shapes
+ * \param provided_arg_shape_data provided shape data
+ * \param provided_arg_shape_idx provided shape data index
+ * \param num_provided_arg_dtypes number of user provided in_arg and aux_state dtypes
+ * \param provided_arg_dtype_names argument name list of provided dtypes
+ * \param provided_arg_dtypes data of provided dtypes
+ * \param num_provided_arg_stypes number of user provided in_arg and aux_state storage types
+ * \param provided_arg_stype_names argument name list of provided storage types
+ * \param provided_arg_stypes data of provided storage types
+ * \param num_shared_arg_names number of parameter names passed from 
_bind_ith_exec
+ * \param shared_arg_name_list parameter name list passed from _bind_ith_exec
+ * \param shared_buffer_len number of shared data arrays passed from 
_bind_ith_exec
+ * \param shared_buffer_name_list shared data array names passed from 
_bind_ith_exec
+ * \param shared_buffer_handle_list shared data array handles passed from 
_bind_ith_exec
+ * \param updated_shared_buffer_name_list updated shared data array names 
after binding
+ * \param updated_shared_buffer_handle_list updated shared data arrays after 
binding
+ * \param num_in_args number of input arguments of this sym
+ * \param in_args list_arguments associated with the current executor
+ * \param arg_grads list of gradients of in_args associated with the current 
executor
+ * \param num_aux_states number of aux states of this sym
+ * \param aux_states list_auxiliary_states associated with the current executor
+ * \param shared_exec_handle shared executor handle passed from _bind_ith_exec
+ * \param out the handle of the executor to be created
+ */
+int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
+   int dev_type,
+   int dev_id,
+   const uint32_t num_g2c_keys,
+   const char** g2c_keys,
+   const int* g2c_dev_types,
+   const int* g2c_dev_ids,
+   const uint32_t provided_grad_req_list_len,
+   const char** provided_grad_req_names,
+   const char** provided_grad_req_types,
+   const uint32_t num_provided_arg_shapes,
+   const char** provided_arg_shape_names,
+   const int* provided_arg_shape_data,
+   const uint32_t* provided_arg_shape_idx,
+   const uint32_t num_provided_arg_dtypes,
+   const char** provided_arg_dtype_names,
+   const int* provided_arg_dtypes,
+   const uint32_t num_provided_arg_stypes,
+   const char** provided_arg_stype_names,
+   const int* provided_arg_stypes,
+   const uint32_t num_shared_arg_names,
+   const char** shared_arg_name_list,
+   int* shared_buffer_len,
+   const char** shared_buffer_name_list,
+   NDArrayHandle* shared_buffer_handle_list,
+   const char*** updated_shared_buffer_name_list,
+   NDArrayHandle** updated_shared_buffer_handle_list,
+   uint32_t* num_in_args,
+   NDArrayHandle** in_args,
+   NDArrayHandle** arg_grads,
+   uint32_t* num_aux_states,
+   NDArrayHandle** aux_states,
+   ExecutorHandle shared_exec_handle,
+   ExecutorHandle* out) {
+  return mxnet::SimpleBindExMaster(symbol_handle,
+dev_type, dev_id,
+num_g2c_keys, g

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16597: [Numpy] [WIP] Loading numpy-incompatible NDArray in numpy-compatible mode

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16597: [Numpy] [WIP] Loading 
numpy-incompatible NDArray in numpy-compatible mode
URL: https://github.com/apache/incubator-mxnet/pull/16597#discussion_r338294777
 
 

 ##
 File path: 
scala-package/native/src/main/native/org_apache_mxnet_native_c_api.cc
 ##
 @@ -2778,7 +2778,7 @@ JNIEXPORT jint JNICALL Java_org_apache_mxnet_LibInfo_mxDumpProfile
 JNIEXPORT jint JNICALL Java_org_apache_mxnet_LibInfo_mxIsNumpyShape
   (JNIEnv *env, jobject obj, jobject compatibleRef) {
   bool isNumpyShape;
 
 Review comment:
   Change `bool` to `int` here


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (91ad266 -> e22e93f)

2019-10-23 Thread thomasdelteil
This is an automated email from the ASF dual-hosted git repository.

thomasdelteil pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 91ad266  fixed broken links across multiple files (#16581)
 add e22e93f  fix missing docs due to git add issues (#16496)

No new revisions were added by this update.

Summary of changes:
 .../python/api/gluon/{loss => data}/index.rst  | 44 ++--
 .../vision/datasets/index.rst} |  7 +--
 .../{contrib/onnx => gluon/data/vision}/index.rst  | 35 +++--
 .../{rnn => data/vision/transforms}/index.rst  | 60 +-
 .../python/api/mxnet/{rtc => log}/index.rst|  6 +--
 .../python/api/mxnet/{torch => model}/index.rst|  4 +-
 6 files changed, 104 insertions(+), 52 deletions(-)
 copy docs/python_docs/python/api/gluon/{loss => data}/index.rst (66%)
 copy docs/python_docs/python/api/gluon/{parameter.rst => 
data/vision/datasets/index.rst} (82%)
 copy docs/python_docs/python/api/{contrib/onnx => gluon/data/vision}/index.rst 
(67%)
 copy docs/python_docs/python/api/gluon/{rnn => 
data/vision/transforms}/index.rst (55%)
 copy docs/python_docs/python/api/mxnet/{rtc => log}/index.rst (93%)
 copy docs/python_docs/python/api/mxnet/{torch => model}/index.rst (95%)



[GitHub] [incubator-mxnet] ThomasDelteil merged pull request #16496: fix missing docs due to git add issues

2019-10-23 Thread GitBox
ThomasDelteil merged pull request #16496: fix missing docs due to git add issues
URL: https://github.com/apache/incubator-mxnet/pull/16496
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ThomasDelteil closed issue #16495: docs for gluon.data.* are missing

2019-10-23 Thread GitBox
ThomasDelteil closed issue #16495: docs for gluon.data.* are missing
URL: https://github.com/apache/incubator-mxnet/issues/16495
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
access2rohit commented on a change in pull request #16585: C Api for 
simplebind, fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338291832
 
 

 ##
 File path: src/c_api/c_api_executor.cc
 ##
 @@ -849,6 +816,152 @@ int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
   API_END();
 }
 
+}  // namespace mxnet
+
+
+/*!
+ * \brief
+ * \param symbol_handle symbol handle
+ * \param dev_type default device type
+ * \param dev_id default device id
+ * \param num_g2c_keys number of group2ctx keys
+ * \param g2c_keys key list of group2ctx
+ * \param g2c_dev_types device type list of group2ctx
+ * \param g2c_dev_ids id list of group2ctx
+ * \param provided_grad_req_list_len grad_req length provided by users in 
front-end
+ * \param provided_grad_req_names grad_req names provided by users in front-end
+ * \param provided_grad_req_types req types provided by users in front-end
+ * \param num_provided_arg_shapes number of user provided in_arg and aux_state 
shapes
+ * \param provided_arg_shape_names name list of provided shapes
+ * \param provided_arg_shape_data provided shape data
+ * \param provided_arg_shape_idx provided shape data index
+ * \param num_provided_arg_dtypes number of user provided in_arg and aux_state 
dtypes
+ * \param provided_arg_dtype_names argument name list of provided dtypes
+ * \param provided_arg_dtypes data of provided dtypes
+ * \param num_provided_arg_stypes number of user provided in_arg and aux_state 
storage types
+ * \param provided_arg_stype_names argument name list of provided storage types
+ * \param provided_arg_stypes data of provided storage types
+ * \param num_shared_arg_names number of parameter names passed from 
_bind_ith_exec
+ * \param shared_arg_name_list parameter name list passed from _bind_ith_exec
+ * \param shared_buffer_len number of shared data arrays passed from 
_bind_ith_exec
+ * \param shared_buffer_name_list shared data array names passed from 
_bind_ith_exec
+ * \param shared_buffer_handle_list shared data array handles passed from 
_bind_ith_exec
+ * \param updated_shared_buffer_name_list updated shared data array names 
after binding
+ * \param updated_shared_buffer_handle_list updated shared data arrays after 
binding
+ * \param num_in_args number of input arguments of this sym
+ * \param in_args list_arguments associated with the current executor
+ * \param arg_grads list of gradients of in_args associated with the current 
executor
+ * \param num_aux_states number of aux states of this sym
+ * \param aux_states list_auxiliary_states associated with the current executor
+ * \param shared_exec_handle shared executor handle passed from _bind_ith_exec
+ * \param out the handle of the executor to be created
+ */
+int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
+   int dev_type,
+   int dev_id,
+   const uint32_t num_g2c_keys,
+   const char** g2c_keys,
+   const int* g2c_dev_types,
+   const int* g2c_dev_ids,
+   const uint32_t provided_grad_req_list_len,
+   const char** provided_grad_req_names,
+   const char** provided_grad_req_types,
+   const uint32_t num_provided_arg_shapes,
+   const char** provided_arg_shape_names,
+   const int* provided_arg_shape_data,
+   const uint32_t* provided_arg_shape_idx,
+   const uint32_t num_provided_arg_dtypes,
+   const char** provided_arg_dtype_names,
+   const int* provided_arg_dtypes,
+   const uint32_t num_provided_arg_stypes,
+   const char** provided_arg_stype_names,
+   const int* provided_arg_stypes,
+   const uint32_t num_shared_arg_names,
+   const char** shared_arg_name_list,
+   int* shared_buffer_len,
+   const char** shared_buffer_name_list,
+   NDArrayHandle* shared_buffer_handle_list,
+   const char*** updated_shared_buffer_name_list,
+   NDArrayHandle** updated_shared_buffer_handle_list,
+   uint32_t* num_in_args,
+   NDArrayHandle** in_args,
+   NDArrayHandle** arg_grads,
+   uint32_t* num_aux_states,
+   NDArrayHandle** aux_states,
+   ExecutorHandle shared_exec_handle,
+   ExecutorHandle* out) {
+  return mxnet::SimpleBindExMaster(symbol_handle,
+dev_type, dev_id,
+num_g2c_keys, 

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338289964
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -3542,6 +3542,62 @@ def test_np_true_divide():
 assert_almost_equal(out_mx.asnumpy(), out_np, rtol=1e-3, atol=1e-3, 
use_broadcast=False)
 
 
+@with_seed()
+@use_np
+def test_np_column_stack():
+class TestColumnStack(HybridBlock):
+def __init__(self):
+super(TestColumnStack, self).__init__()
+
+def hybrid_forward(self, F, a, *args):
+return F.np.column_stack([a] + list(args))
+
+def g(data):
+return _np.ones_like(data)
+
+configs = [
+((), (), ()),
+((2), (2), (2)),
+((1, 3), (1, 3), (1, 3)),
+((0), (0), (0)),
+((2, 2), (2, 1), (2, 3)),
+((4, 3), (4, 4), (4, 1)),
+((2, 2, 2), (2, 4, 2), (2, 2, 2)),
+((0, 1, 1), (0, 1, 1), (0, 1, 1)),
+((2, 1), (2, 2), (2, 2))
+]
+types = ['float16', 'float32', 'float64', 'int8', 'int32', 'int64']
+for config in configs:
+for hybridize in [True, False]:
+for dtype in types:
+test_column_stack = TestColumnStack()
+if hybridize:
+test_column_stack.hybridize()
+rtol = 1e-3
+atol = 1e-5
+v = []
+v_np = []
+for i in range(3):
+v_np.append(_np.array(_np.random.uniform(-10.0, 10.0, 
config[i]), dtype=dtype))
+v.append(mx.nd.array(v_np[i]).as_np_ndarray())
+v[i].attach_grad()
+expected_np = _np.column_stack(v_np)
+with mx.autograd.record():
+mx_out = test_column_stack(*v)
+assert mx_out.shape == expected_np.shape
+assert_almost_equal(mx_out.asnumpy(), expected_np, rtol=rtol, 
atol=atol)
+
+# Test gradient
+mx_out.backward()
+for i in range(3):
+expected_grad = g(v_np[i])
+assert_almost_equal(v[i].grad.asnumpy(), expected_grad, 
rtol=rtol, atol=atol)
+
+# Test imperative once again
+mx_out = np.column_stack(v)
+expected_np = _np.column_stack(v_np)
+assert_almost_equal(mx_out.asnumpy(), expected_np, rtol=rtol, 
atol=atol)
+
 
 Review comment:
   2 blank lines here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338289541
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -3542,6 +3542,62 @@ def test_np_true_divide():
 assert_almost_equal(out_mx.asnumpy(), out_np, rtol=1e-3, atol=1e-3, 
use_broadcast=False)
 
 
+@with_seed()
+@use_np
+def test_np_column_stack():
+class TestColumnStack(HybridBlock):
+def __init__(self):
+super(TestColumnStack, self).__init__()
+
+def hybrid_forward(self, F, a, *args):
+return F.np.column_stack([a] + list(args))
+
+def g(data):
+return _np.ones_like(data)
+
+configs = [
+((), (), ()),
+((2), (2), (2)),
+((1, 3), (1, 3), (1, 3)),
+((0), (0), (0)),
+((2, 2), (2, 1), (2, 3)),
+((4, 3), (4, 4), (4, 1)),
+((2, 2, 2), (2, 4, 2), (2, 2, 2)),
+((0, 1, 1), (0, 1, 1), (0, 1, 1)),
+((2, 1), (2, 2), (2, 2))
 
 Review comment:
   Same here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338289283
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -3542,6 +3542,62 @@ def test_np_true_divide():
 assert_almost_equal(out_mx.asnumpy(), out_np, rtol=1e-3, atol=1e-3, 
use_broadcast=False)
 
 
+@with_seed()
+@use_np
+def test_np_column_stack():
+class TestColumnStack(HybridBlock):
+def __init__(self):
+super(TestColumnStack, self).__init__()
+
+def hybrid_forward(self, F, a, *args):
+return F.np.column_stack([a] + list(args))
+
+def g(data):
+return _np.ones_like(data)
+
+configs = [
+((), (), ()),
+((2), (2), (2)),
+((1, 3), (1, 3), (1, 3)),
 
 Review comment:
   I also think this might actually be a duplicate of the "column_stack of 2-D arrays with non-zero column sizes" case.
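   For readers following along, a quick check with the official NumPy shows why the two configs exercise the same code path (shapes mirror the test configs above):
   ```python
   import numpy as np

   # Three 2-D inputs with one row each: column_stack concatenates 2-D
   # inputs along axis 1, the same path as any other non-zero-column case.
   a, b, c = np.ones((1, 3)), np.ones((1, 3)), np.ones((1, 3))
   print(np.column_stack((a, b, c)).shape)   # (1, 9)

   # The ((2, 2), (2, 1), (2, 3)) config already covers that path:
   d, e, f = np.ones((2, 2)), np.ones((2, 1)), np.ones((2, 3))
   print(np.column_stack((d, e, f)).shape)   # (2, 6)
   ```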


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338288758
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -3542,6 +3542,62 @@ def test_np_true_divide():
 assert_almost_equal(out_mx.asnumpy(), out_np, rtol=1e-3, atol=1e-3, 
use_broadcast=False)
 
 
+@with_seed()
+@use_np
+def test_np_column_stack():
+class TestColumnStack(HybridBlock):
+def __init__(self):
+super(TestColumnStack, self).__init__()
+
+def hybrid_forward(self, F, a, *args):
+return F.np.column_stack([a] + list(args))
+
+def g(data):
+return _np.ones_like(data)
+
+configs = [
+((), (), ()),
+((2), (2), (2)),
+((1, 3), (1, 3), (1, 3)),
+((0), (0), (0)),
+((2, 2), (2, 1), (2, 3)),
+((4, 3), (4, 4), (4, 1)),
 
 Review comment:
   Or you could change this case to `((4, 3), (4, 0), (4, 1))`
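   For reference, the official NumPy handles the suggested zero-column config like this, an edge case the current list never hits:
   ```python
   import numpy as np

   # A (4, 0) input contributes no columns but must still match the
   # first dimension of the other inputs.
   a, b, c = np.ones((4, 3)), np.ones((4, 0)), np.ones((4, 1))
   print(np.column_stack((a, b, c)).shape)   # (4, 4)
   ```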


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
anirudh2290 commented on a change in pull request #16585: C Api for simplebind, 
fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338288416
 
 

 ##
 File path: src/c_api/c_api_executor.cc
 ##
 @@ -849,6 +816,152 @@ int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
   API_END();
 }
 
+}  // namespace mxnet
+
+
+/*!
+ * \brief
+ * \param symbol_handle symbol handle
+ * \param dev_type default device type
+ * \param dev_id default device id
+ * \param num_g2c_keys number of group2ctx keys
+ * \param g2c_keys key list of group2ctx
+ * \param g2c_dev_types device type list of group2ctx
+ * \param g2c_dev_ids id list of group2ctx
+ * \param provided_grad_req_list_len grad_req length provided by users in 
front-end
+ * \param provided_grad_req_names grad_req names provided by users in front-end
+ * \param provided_grad_req_types req types provided by users in front-end
+ * \param num_provided_arg_shapes number of user provided in_arg and aux_state 
shapes
+ * \param provided_arg_shape_names name list of provided shapes
+ * \param provided_arg_shape_data provided shape data
+ * \param provided_arg_shape_idx provided shape data index
+ * \param num_provided_arg_dtypes number of user provided in_arg and aux_state 
dtypes
+ * \param provided_arg_dtype_names argument name list of provided dtypes
+ * \param provided_arg_dtypes data of provided dtypes
+ * \param num_provided_arg_stypes number of user provided in_arg and aux_state 
storage types
+ * \param provided_arg_stype_names argument name list of provided storage types
+ * \param provided_arg_stypes data of provided storage types
+ * \param num_shared_arg_names number of parameter names passed from 
_bind_ith_exec
+ * \param shared_arg_name_list parameter name list passed from _bind_ith_exec
+ * \param shared_buffer_len number of shared data arrays passed from 
_bind_ith_exec
+ * \param shared_buffer_name_list shared data array names passed from 
_bind_ith_exec
+ * \param shared_buffer_handle_list shared data array handles passed from 
_bind_ith_exec
+ * \param updated_shared_buffer_name_list updated shared data array names 
after binding
+ * \param updated_shared_buffer_handle_list updated shared data arrays after 
binding
+ * \param num_in_args number of input arguments of this sym
+ * \param in_args list_arguments associated with the current executor
+ * \param arg_grads list of gradients of in_args associated with the current 
executor
+ * \param num_aux_states number of aux states of this sym
+ * \param aux_states list_auxiliary_states associated with the current executor
+ * \param shared_exec_handle shared executor handle passed from _bind_ith_exec
+ * \param out the handle of the executor to be created
+ */
+int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
+   int dev_type,
+   int dev_id,
+   const uint32_t num_g2c_keys,
+   const char** g2c_keys,
+   const int* g2c_dev_types,
+   const int* g2c_dev_ids,
+   const uint32_t provided_grad_req_list_len,
+   const char** provided_grad_req_names,
+   const char** provided_grad_req_types,
+   const uint32_t num_provided_arg_shapes,
+   const char** provided_arg_shape_names,
+   const int* provided_arg_shape_data,
+   const uint32_t* provided_arg_shape_idx,
+   const uint32_t num_provided_arg_dtypes,
+   const char** provided_arg_dtype_names,
+   const int* provided_arg_dtypes,
+   const uint32_t num_provided_arg_stypes,
+   const char** provided_arg_stype_names,
+   const int* provided_arg_stypes,
+   const uint32_t num_shared_arg_names,
+   const char** shared_arg_name_list,
+   int* shared_buffer_len,
+   const char** shared_buffer_name_list,
+   NDArrayHandle* shared_buffer_handle_list,
+   const char*** updated_shared_buffer_name_list,
+   NDArrayHandle** updated_shared_buffer_handle_list,
+   uint32_t* num_in_args,
+   NDArrayHandle** in_args,
+   NDArrayHandle** arg_grads,
+   uint32_t* num_aux_states,
+   NDArrayHandle** aux_states,
+   ExecutorHandle shared_exec_handle,
+   ExecutorHandle* out) {
+  return mxnet::SimpleBindExMaster(symbol_handle,
+dev_type, dev_id,
+num_g2c_keys, g

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338288163
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -3542,6 +3542,62 @@ def test_np_true_divide():
 assert_almost_equal(out_mx.asnumpy(), out_np, rtol=1e-3, atol=1e-3, 
use_broadcast=False)
 
 
+@with_seed()
+@use_np
+def test_np_column_stack():
+class TestColumnStack(HybridBlock):
+def __init__(self):
+super(TestColumnStack, self).__init__()
+
+def hybrid_forward(self, F, a, *args):
+return F.np.column_stack([a] + list(args))
+
+def g(data):
+return _np.ones_like(data)
+
+configs = [
+((), (), ()),
+((2), (2), (2)),
+((1, 3), (1, 3), (1, 3)),
+((0), (0), (0)),
+((2, 2), (2, 1), (2, 3)),
+((4, 3), (4, 4), (4, 1)),
 
 Review comment:
   This case is a duplicate of the case above; remove either one of them.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
access2rohit commented on a change in pull request #16585: C Api for 
simplebind, fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338286910
 
 

 ##
 File path: src/c_api/c_api_executor.cc
 ##
 @@ -849,6 +816,152 @@ int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
   API_END();
 }
 
+}  // namespace mxnet
+
+
+/*!
+ * \brief
+ * \param symbol_handle symbol handle
+ * \param dev_type default device type
+ * \param dev_id default device id
+ * \param num_g2c_keys number of group2ctx keys
+ * \param g2c_keys key list of group2ctx
+ * \param g2c_dev_types device type list of group2ctx
+ * \param g2c_dev_ids id list of group2ctx
+ * \param provided_grad_req_list_len grad_req length provided by users in 
front-end
+ * \param provided_grad_req_names grad_req names provided by users in front-end
+ * \param provided_grad_req_types req types provided by users in front-end
+ * \param num_provided_arg_shapes number of user provided in_arg and aux_state 
shapes
+ * \param provided_arg_shape_names name list of provided shapes
+ * \param provided_arg_shape_data provided shape data
+ * \param provided_arg_shape_idx provided shape data index
+ * \param num_provided_arg_dtypes number of user provided in_arg and aux_state 
dtypes
+ * \param provided_arg_dtype_names argument name list of provided dtypes
+ * \param provided_arg_dtypes data of provided dtypes
+ * \param num_provided_arg_stypes number of user provided in_arg and aux_state 
storage types
+ * \param provided_arg_stype_names argument name list of provided storage types
+ * \param provided_arg_stypes data of provided storage types
+ * \param num_shared_arg_names number of parameter names passed from 
_bind_ith_exec
+ * \param shared_arg_name_list parameter name list passed from _bind_ith_exec
+ * \param shared_buffer_len number of shared data arrays passed from 
_bind_ith_exec
+ * \param shared_buffer_name_list shared data array names passed from 
_bind_ith_exec
+ * \param shared_buffer_handle_list shared data array handles passed from 
_bind_ith_exec
+ * \param updated_shared_buffer_name_list updated shared data array names 
after binding
+ * \param updated_shared_buffer_handle_list updated shared data arrays after 
binding
+ * \param num_in_args number of input arguments of this sym
+ * \param in_args list_arguments associated with the current executor
+ * \param arg_grads list of gradients of in_args associated with the current 
executor
+ * \param num_aux_states number of aux states of this sym
+ * \param aux_states list_auxiliary_states associated with the current executor
+ * \param shared_exec_handle shared executor handle passed from _bind_ith_exec
+ * \param out the handle of the executor to be created
+ */
+int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
+   int dev_type,
+   int dev_id,
+   const uint32_t num_g2c_keys,
+   const char** g2c_keys,
+   const int* g2c_dev_types,
+   const int* g2c_dev_ids,
+   const uint32_t provided_grad_req_list_len,
+   const char** provided_grad_req_names,
+   const char** provided_grad_req_types,
+   const uint32_t num_provided_arg_shapes,
+   const char** provided_arg_shape_names,
+   const int* provided_arg_shape_data,
+   const uint32_t* provided_arg_shape_idx,
+   const uint32_t num_provided_arg_dtypes,
+   const char** provided_arg_dtype_names,
+   const int* provided_arg_dtypes,
+   const uint32_t num_provided_arg_stypes,
+   const char** provided_arg_stype_names,
+   const int* provided_arg_stypes,
+   const uint32_t num_shared_arg_names,
+   const char** shared_arg_name_list,
+   int* shared_buffer_len,
+   const char** shared_buffer_name_list,
+   NDArrayHandle* shared_buffer_handle_list,
+   const char*** updated_shared_buffer_name_list,
+   NDArrayHandle** updated_shared_buffer_handle_list,
+   uint32_t* num_in_args,
+   NDArrayHandle** in_args,
+   NDArrayHandle** arg_grads,
+   uint32_t* num_aux_states,
+   NDArrayHandle** aux_states,
+   ExecutorHandle shared_exec_handle,
+   ExecutorHandle* out) {
+  return mxnet::SimpleBindExMaster(symbol_handle,
+dev_type, dev_id,
+num_g2c_keys, 

[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
anirudh2290 commented on a change in pull request #16585: C Api for simplebind, 
fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338286200
 
 

 ##
 File path: src/c_api/c_api_executor.cc
 ##
 @@ -515,44 +515,11 @@ int MXExecutorSimpleBind(SymbolHandle symbol_handle,
   API_END();
 }
 
-/*!
- * \brief
- * \param symbol_handle symbol handle
- * \param dev_type default device type
- * \param dev_id default device id
- * \param num_g2c_keys number of group2ctx keys
- * \param g2c_keys key list of group2ctx
- * \param g2c_dev_types device type list of group2ctx
- * \param g2c_dev_ids id list of group2ctx
- * \param provided_grad_req_list_len grad_req length provided by users in 
front-end
- * \param provided_grad_req_names grad_req names provided by users in front-end
- * \param provided_grad_req_types req types provided by users in front-end
- * \param num_provided_arg_shapes number of user provided in_arg and aux_state 
shapes
- * \param provided_arg_shape_names name list of provided shapes
- * \param provided_arg_shape_data provided shape data
- * \param provided_arg_shape_idx provided shape data index
- * \param num_provided_arg_dtypes number of user provided in_arg and aux_state 
dtypes
- * \param provided_arg_dtype_names argument name list of provided dtypes
- * \param provided_arg_dtypes data of provided dtypes
- * \param num_provided_arg_stypes number of user provided in_arg and aux_state 
storage types
- * \param provided_arg_stype_names argument name list of provided storage types
- * \param provided_arg_stypes data of provided storage types
- * \param num_shared_arg_names number of parameter names passed from 
_bind_ith_exec
- * \param shared_arg_name_list parameter name list passed from _bind_ith_exec
- * \param shared_buffer_len number of shared data arrays passed from 
_bind_ith_exec
- * \param shared_buffer_name_list shared data array names passed from 
_bind_ith_exec
- * \param shared_buffer_handle_list shared data array handles passed from 
_bind_ith_exec
- * \param updated_shared_buffer_name_list updated shared data array names 
after binding
- * \param updated_shared_buffer_handle_list updated shared data arrays after 
binding
- * \param num_in_args number of input arguments of this sym
- * \param in_args list_arguments associated with the current executor
- * \param arg_grads list of gradients of in_args associated with the current 
executor
- * \param num_aux_states number of aux states of this sym
- * \param aux_states list_auxiliary_states associated with the current executor
- * \param shared_exec_handle shared executor handle passed from _bind_ith_exec
- * \param out the handle of the executor to be created
- */
-int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
+
+namespace mxnet {
+
+template
+int SimpleBindExMaster(SymbolHandle symbol_handle,
 
 Review comment:
   _SimpleBindImpl 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16589: Fix index overflow bug in einsum

2019-10-23 Thread GitBox
sxjscience commented on a change in pull request #16589: Fix index overflow bug 
in einsum
URL: https://github.com/apache/incubator-mxnet/pull/16589#discussion_r338285002
 
 

 ##
 File path: tests/python/unittest/test_numpy_op.py
 ##
 @@ -3377,16 +3377,20 @@ def dbg(name, data):
 
_np.dot(args[0].T, _np.dot(_np.ones((2, 2)), args[2].T)),
 
_np.dot(_np.dot(args[0], args[1]).T, _np.ones((2, 2),
 # broadcast bug
-(('ij, ij -> i'), [(1, 4), (2, 4)], lambda *args: (_np.sum(args[1], axis=0)[None, :],
-   _np.tile(args[0], [2, 1]))),
+('ij, ij -> i', [(1, 4), (2, 4)], lambda *args: (_np.sum(args[1], axis=0)[None, :],
+ _np.tile(args[0], [2, 1]))),
+# issue #16576
+# commented due to long running time
+# ('abiz,abjz->abij', [(64, 8, 128, 512), (64, 8, 128, 512)], lambda *args: (_np.matmul(_np.ones((64, 8, 128, 128)), args[1]),
+#  _np.matmul(_np.ones((64, 8, 128, 128)), args[0]))),
 ]
-dtypes = ['int32', 'float16', 'float32', 'float64']
+dtypes = ['int32', 'float32', 'float64']
 for hybridize in [False, True]:
 for dtype in dtypes:
 for config in configs:
 for optimize in [False, True]:
-rtol = 1e-0 if dtype == 'float16' else 1e-3
-atol = 1e-1 if dtype == 'float16' else 1e-5
+rtol = 1e-0 if dtype == 'float16' else 1e-1
+atol = 1e-1 if dtype == 'float16' else 1e-1
 
 Review comment:
   @hzfan You may also refer to this tutorial in TF: 
https://www.tensorflow.org/guide/keras/rnn


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
ChaiBapchya commented on a change in pull request #16585: C Api for simplebind, 
fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338283466
 
 

 ##
 File path: src/c_api/c_api_executor.cc
 ##
 @@ -515,44 +515,11 @@ int MXExecutorSimpleBind(SymbolHandle symbol_handle,
   API_END();
 }
 
-/*!
- * \brief
- * \param symbol_handle symbol handle
- * \param dev_type default device type
- * \param dev_id default device id
- * \param num_g2c_keys number of group2ctx keys
- * \param g2c_keys key list of group2ctx
- * \param g2c_dev_types device type list of group2ctx
- * \param g2c_dev_ids id list of group2ctx
- * \param provided_grad_req_list_len grad_req length provided by users in 
front-end
- * \param provided_grad_req_names grad_req names provided by users in front-end
- * \param provided_grad_req_types req types provided by users in front-end
- * \param num_provided_arg_shapes number of user provided in_arg and aux_state 
shapes
- * \param provided_arg_shape_names name list of provided shapes
- * \param provided_arg_shape_data provided shape data
- * \param provided_arg_shape_idx provided shape data index
- * \param num_provided_arg_dtypes number of user provided in_arg and aux_state 
dtypes
- * \param provided_arg_dtype_names argument name list of provided dtypes
- * \param provided_arg_dtypes data of provided dtypes
- * \param num_provided_arg_stypes number of user provided in_arg and aux_state 
storage types
- * \param provided_arg_stype_names argument name list of provided storage types
- * \param provided_arg_stypes data of provided storage types
- * \param num_shared_arg_names number of parameter names passed from 
_bind_ith_exec
- * \param shared_arg_name_list parameter name list passed from _bind_ith_exec
- * \param shared_buffer_len number of shared data arrays passed from 
_bind_ith_exec
- * \param shared_buffer_name_list shared data array names passed from 
_bind_ith_exec
- * \param shared_buffer_handle_list shared data array handles passed from 
_bind_ith_exec
- * \param updated_shared_buffer_name_list updated shared data array names 
after binding
- * \param updated_shared_buffer_handle_list updated shared data arrays after 
binding
- * \param num_in_args number of input arguments of this sym
- * \param in_args list_arguments associated with the current executor
- * \param arg_grads list of gradients of in_args associated with the current 
executor
- * \param num_aux_states number of aux states of this sym
- * \param aux_states list_auxiliary_states associated with the current executor
- * \param shared_exec_handle shared executor handle passed from _bind_ith_exec
- * \param out the handle of the executor to be created
- */
-int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
+
+namespace mxnet {
+
+template
+int SimpleBindExMaster(SymbolHandle symbol_handle,
 
 Review comment:
   Also, @access2rohit suggested renaming it to SimpleBindExImpl.
   So, combining both suggestions, should the function name be `_SimpleBindExImpl`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
anirudh2290 commented on a change in pull request #16585: C Api for simplebind, 
fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338282692
 
 

 ##
 File path: src/c_api/c_api_executor.cc
 ##
 @@ -515,44 +515,11 @@ int MXExecutorSimpleBind(SymbolHandle symbol_handle,
   API_END();
 }
 
-/*!
- * \brief
- * \param symbol_handle symbol handle
- * \param dev_type default device type
- * \param dev_id default device id
- * \param num_g2c_keys number of group2ctx keys
- * \param g2c_keys key list of group2ctx
- * \param g2c_dev_types device type list of group2ctx
- * \param g2c_dev_ids id list of group2ctx
- * \param provided_grad_req_list_len grad_req length provided by users in 
front-end
- * \param provided_grad_req_names grad_req names provided by users in front-end
- * \param provided_grad_req_types req types provided by users in front-end
- * \param num_provided_arg_shapes number of user provided in_arg and aux_state 
shapes
- * \param provided_arg_shape_names name list of provided shapes
- * \param provided_arg_shape_data provided shape data
- * \param provided_arg_shape_idx provided shape data index
- * \param num_provided_arg_dtypes number of user provided in_arg and aux_state 
dtypes
- * \param provided_arg_dtype_names argument name list of provided dtypes
- * \param provided_arg_dtypes data of provided dtypes
- * \param num_provided_arg_stypes number of user provided in_arg and aux_state 
storage types
- * \param provided_arg_stype_names argument name list of provided storage types
- * \param provided_arg_stypes data of provided storage types
- * \param num_shared_arg_names number of parameter names passed from 
_bind_ith_exec
- * \param shared_arg_name_list parameter name list passed from _bind_ith_exec
- * \param shared_buffer_len number of shared data arrays passed from 
_bind_ith_exec
- * \param shared_buffer_name_list shared data array names passed from 
_bind_ith_exec
- * \param shared_buffer_handle_list shared data array handles passed from 
_bind_ith_exec
- * \param updated_shared_buffer_name_list updated shared data array names 
after binding
- * \param updated_shared_buffer_handle_list updated shared data arrays after 
binding
- * \param num_in_args number of input arguments of this sym
- * \param in_args list_arguments associated with the current executor
- * \param arg_grads list of gradients of in_args associated with the current 
executor
- * \param num_aux_states number of aux states of this sym
- * \param aux_states list_auxiliary_states associated with the current executor
- * \param shared_exec_handle shared executor handle passed from _bind_ith_exec
- * \param out the handle of the executor to be created
- */
-int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
+
+namespace mxnet {
+
+template
+int SimpleBindExMaster(SymbolHandle symbol_handle,
 
 Review comment:
   Internal functions in the C API generally begin with `_`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16585: C Api for simplebind, fix comment for trigoops, add atol to assert

2019-10-23 Thread GitBox
anirudh2290 commented on a change in pull request #16585: C Api for simplebind, 
fix comment for trigoops, add atol to assert
URL: https://github.com/apache/incubator-mxnet/pull/16585#discussion_r338282191
 
 

 ##
 File path: src/c_api/c_api_executor.cc
 ##
 @@ -849,6 +816,152 @@ int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
   API_END();
 }
 
+}  // namespace mxnet
+
+
+/*!
+ * \brief
+ * \param symbol_handle symbol handle
+ * \param dev_type default device type
+ * \param dev_id default device id
+ * \param num_g2c_keys number of group2ctx keys
+ * \param g2c_keys key list of group2ctx
+ * \param g2c_dev_types device type list of group2ctx
+ * \param g2c_dev_ids id list of group2ctx
+ * \param provided_grad_req_list_len grad_req length provided by users in 
front-end
+ * \param provided_grad_req_names grad_req names provided by users in front-end
+ * \param provided_grad_req_types req types provided by users in front-end
+ * \param num_provided_arg_shapes number of user provided in_arg and aux_state 
shapes
+ * \param provided_arg_shape_names name list of provided shapes
+ * \param provided_arg_shape_data provided shape data
+ * \param provided_arg_shape_idx provided shape data index
+ * \param num_provided_arg_dtypes number of user provided in_arg and aux_state 
dtypes
+ * \param provided_arg_dtype_names argument name list of provided dtypes
+ * \param provided_arg_dtypes data of provided dtypes
+ * \param num_provided_arg_stypes number of user provided in_arg and aux_state 
storage types
+ * \param provided_arg_stype_names argument name list of provided storage types
+ * \param provided_arg_stypes data of provided storage types
+ * \param num_shared_arg_names number of parameter names passed from 
_bind_ith_exec
+ * \param shared_arg_name_list parameter name list passed from _bind_ith_exec
+ * \param shared_buffer_len number of shared data arrays passed from 
_bind_ith_exec
+ * \param shared_buffer_name_list shared data array names passed from 
_bind_ith_exec
+ * \param shared_buffer_handle_list shared data array handles passed from 
_bind_ith_exec
+ * \param updated_shared_buffer_name_list updated shared data array names 
after binding
+ * \param updated_shared_buffer_handle_list updated shared data arrays after 
binding
+ * \param num_in_args number of input arguments of this sym
+ * \param in_args list_arguments associated with the current executor
+ * \param arg_grads list of gradients of in_args associated with the current 
executor
+ * \param num_aux_states number of aux states of this sym
+ * \param aux_states list_auxiliary_states associated with the current executor
+ * \param shared_exec_handle shared executor handle passed from _bind_ith_exec
+ * \param out the handle of the executor to be created
+ */
+int MXExecutorSimpleBindEx(SymbolHandle symbol_handle,
+   int dev_type,
+   int dev_id,
+   const uint32_t num_g2c_keys,
+   const char** g2c_keys,
+   const int* g2c_dev_types,
+   const int* g2c_dev_ids,
+   const uint32_t provided_grad_req_list_len,
+   const char** provided_grad_req_names,
+   const char** provided_grad_req_types,
+   const uint32_t num_provided_arg_shapes,
+   const char** provided_arg_shape_names,
+   const int* provided_arg_shape_data,
+   const uint32_t* provided_arg_shape_idx,
+   const uint32_t num_provided_arg_dtypes,
+   const char** provided_arg_dtype_names,
+   const int* provided_arg_dtypes,
+   const uint32_t num_provided_arg_stypes,
+   const char** provided_arg_stype_names,
+   const int* provided_arg_stypes,
+   const uint32_t num_shared_arg_names,
+   const char** shared_arg_name_list,
+   int* shared_buffer_len,
+   const char** shared_buffer_name_list,
+   NDArrayHandle* shared_buffer_handle_list,
+   const char*** updated_shared_buffer_name_list,
+   NDArrayHandle** updated_shared_buffer_handle_list,
+   uint32_t* num_in_args,
+   NDArrayHandle** in_args,
+   NDArrayHandle** arg_grads,
+   uint32_t* num_aux_states,
+   NDArrayHandle** aux_states,
+   ExecutorHandle shared_exec_handle,
+   ExecutorHandle* out) {
+  return mxnet::SimpleBindExMaster(symbol_handle,
+dev_type, dev_id,
+num_g2c_keys, g

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338282712
 
 

 ##
 File path: src/operator/numpy/np_matrix_op-inl.h
 ##
 @@ -71,6 +79,80 @@ void NumpyTranspose(const nnvm::NodeAttrs& attrs,
   }
 }
 
+template<typename xpu>
+void NumpyColumnStackForward(const nnvm::NodeAttrs& attrs,
+const OpContext& ctx,
+const std::vector<TBlob>& inputs,
+const std::vector<OpReqType>& req,
+const std::vector<TBlob>& outputs) {
 
 Review comment:
   Same applies to all other function signatures.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338276663
 
 

 ##
 File path: src/operator/numpy/np_matrix_op-inl.h
 ##
 @@ -71,6 +79,80 @@ void NumpyTranspose(const nnvm::NodeAttrs& attrs,
   }
 }
 
+template<typename xpu>
+void NumpyColumnStackForward(const nnvm::NodeAttrs& attrs,
+const OpContext& ctx,
+const std::vector<TBlob>& inputs,
+const std::vector<OpReqType>& req,
+const std::vector<TBlob>& outputs) {
 
 Review comment:
   Alignment:
   ```c++
   void NumpyColumnStackForward(const nnvm::NodeAttrs& attrs,
                                const OpContext& ctx,
                                const std::vector<TBlob>& inputs,
                                const std::vector<OpReqType>& req,
                                const std::vector<TBlob>& outputs) {
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338276320
 
 

 ##
 File path: python/mxnet/numpy_dispatch_protocol.py
 ##
 @@ -119,6 +119,7 @@ def _run_with_array_ufunc_proto(*args, **kwargs):
 'var',
 'vdot',
 'vstack',
+# 'column_stack',
 
 Review comment:
   Why is this commented out?
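   (For context: the entries in this list are the ops exercised by the array-function dispatch tests. Below is a minimal sketch of the NEP 18 mechanism the list feeds, using plain NumPy and a hypothetical `MyArray` class rather than mxnet's actual ndarray:)
   ```python
   import numpy as onp

   class MyArray:
       """Hypothetical array type that hooks numpy functions via NEP 18."""
       def __init__(self, data):
           self.data = onp.asarray(data)

       def __array_function__(self, func, types, args, kwargs):
           if func is onp.column_stack:
               # args[0] is the tuple of arrays passed to np.column_stack
               return MyArray(onp.column_stack([a.data for a in args[0]]))
           return NotImplemented

   out = onp.column_stack((MyArray([1, 2]), MyArray([3, 4])))
   print(out.data)   # [[1 3]
                     #  [2 4]]
   ```
   Keeping 'column_stack' commented out means this dispatch path is never tested for the new op.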


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338276158
 
 

 ##
 File path: python/mxnet/numpy/multiarray.py
 ##
 @@ -55,7 +55,7 @@
'swapaxes', 'clip', 'argmax', 'std', 'var', 'indices', 'copysign', 
'ravel', 'hanning', 'hamming',
'blackman', 'flip', 'around', 'arctan2', 'hypot', 'rad2deg', 
'deg2rad', 'unique', 'lcm', 'tril',
'identity', 'take', 'ldexp', 'vdot', 'inner', 'outer', 'equal', 
'not_equal', 'greater', 'less',
-   'greater_equal', 'less_equal', 'hsplit', 'rot90', 'einsum', 
'true_divide']
+   'greater_equal', 'less_equal', 'hsplit', 'rot90', 'einsum', 
'true_divide', 'column_stack']
 
 Review comment:
   Same comment on the positions of `column_stack`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on issue #16566: [CI test failure] test_fast_lars fails on windows gpu

2019-10-23 Thread GitBox
ptrendx commented on issue #16566: [CI test failure] test_fast_lars fails on 
windows gpu
URL: 
https://github.com/apache/incubator-mxnet/issues/16566#issuecomment-545631852
 
 
   @Caenorst 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338272478
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -4761,3 +4760,39 @@ def einsum(*operands, **kwargs):
 subscripts = operands[0]
 operands = operands[1:]
 return _npi.einsum(*operands, subscripts=subscripts, out=out, 
optimize=int(optimize_arg))
+
 
 Review comment:
   2 blank lines between all Python functions.
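   i.e., following PEP 8's rule for top-level definitions, the file should read like:
   ```python
   def einsum(*operands, **kwargs):
       ...


   def column_stack(tup):   # exactly two blank lines above this def
       ...
   ```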


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16594: [Numpy] implement np.column_stack

2019-10-23 Thread GitBox
haojin2 commented on a change in pull request #16594: [Numpy] implement 
np.column_stack
URL: https://github.com/apache/incubator-mxnet/pull/16594#discussion_r338272300
 
 

 ##
 File path: python/mxnet/ndarray/numpy/_op.py
 ##
 @@ -38,8 +38,7 @@
'std', 'var', 'indices', 'copysign', 'ravel', 'hanning', 'hamming', 
'blackman', 'flip',
'around', 'hypot', 'rad2deg', 'deg2rad', 'unique', 'lcm', 'tril', 
'identity', 'take',
'ldexp', 'vdot', 'inner', 'outer', 'equal', 'not_equal', 'greater', 
'less', 'greater_equal', 'less_equal',
-   'hsplit', 'rot90', 'einsum', 'true_divide']
-
+   'hsplit', 'rot90', 'einsum', 'true_divide', 'column_stack']
 
 Review comment:
   Please move 'column_stack' to after `vstack` in this list.
   Also please move the function definition to after `vstack` as well in the 
file.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on issue #16587: How could I set the location of openblas/lapack when I compile mxnet from source?

2019-10-23 Thread GitBox
haojin2 commented on issue #16587: How could I set the location of 
openblas/lapack when I compile mxnet from source?
URL: 
https://github.com/apache/incubator-mxnet/issues/16587#issuecomment-545629395
 
 
   Does the CMake build run without explicitly specifying your openblas and lapack locations?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 edited a comment on issue #16596: How to initialize a CPU tensor in custom cu file?

2019-10-23 Thread GitBox
haojin2 edited a comment on issue #16596: How to initialize a CPU tensor in 
custom cu file?
URL: 
https://github.com/apache/incubator-mxnet/issues/16596#issuecomment-545628138
 
 
   @vasusingla619 If you're asking how to create a temporary CPU memory space 
in a .cu file, you can simply do this:
   ```c++
   void YourFunction(...) {
   // Your code
    std::vector<float> temp_buffer(workspace_size * 5, 0);  // host-side storage
    // Wrap the buffer in a 1-D CPU tensor (<cpu, 1, float> is illustrative;
    // substitute your actual DType).
    Tensor<cpu, 1, float> workspace(temp_buffer.data(), Shape1(workspace_size * 5));
   // Your code
   }
   ```
   Lemme know if this answers your question.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on issue #16596: How to initialize a CPU tensor in custom cu file?

2019-10-23 Thread GitBox
haojin2 commented on issue #16596: How to initialize a CPU tensor in custom cu 
file?
URL: 
https://github.com/apache/incubator-mxnet/issues/16596#issuecomment-545628138
 
 
   @vasusingla619 If you're asking how to create a temporary CPU memory space 
in a .cu file, you can simply do this:
   ```c++
   void YourFunction(...) {
    std::vector<float> temp_buffer(workspace_size * 5, 0);  // host-side storage
    // Wrap the buffer in a 1-D CPU tensor (<cpu, 1, float> is illustrative;
    // substitute your actual DType).
    Tensor<cpu, 1, float> workspace(temp_buffer.data(), Shape1(workspace_size * 5));
   }
   ```
   Lemme know if this answers your question.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

