[GitHub] xcwanAndy opened a new issue #14280: Compiling from source code error

2019-02-27 Thread GitBox
xcwanAndy opened a new issue #14280: Compiling from source code error
URL: https://github.com/apache/incubator-mxnet/issues/14280
 
 
   ## Description
   Compiling from source produced a ***link libzmq*** error.
   ## Environment info (Required)
   ```
   --Python Info--
   ('Version  :', '2.7.12')
   ('Compiler :', 'GCC 5.4.0 20160609')
   ('Build:', ('default', 'Nov 12 2018 14:36:49'))
   ('Arch :', ('64bit', 'ELF'))
   Pip Info---
   ('Version  :', '8.1.1')
   ('Directory:', '/usr/lib/python2.7/dist-packages/pip')
   --MXNet Info---
   ('Version  :', '1.3.1')
   ('Directory:', '/home/ubuntu/.local/lib/python2.7/site-packages/mxnet')
   ('Commit Hash   :', '19c501680183237d52a862e6ae1dc4ddc296305b')
   --System Info--
   ('Platform :', 'Linux-4.4.0-142-generic-x86_64-with-Ubuntu-16.04-xenial')
   ('system   :', 'Linux')
   ('node :', 'cpu15')
   ('release  :', '4.4.0-142-generic')
   ('version  :', '#168-Ubuntu SMP Wed Jan 16 21:00:45 UTC 2019')
   --Hardware Info--
   ('machine  :', 'x86_64')
   ('processor:', 'x86_64')
   Architecture:  x86_64
   CPU op-mode(s):32-bit, 64-bit
   Byte Order:Little Endian
   CPU(s):24
   On-line CPU(s) list:   0-23
   Thread(s) per core:2
   Core(s) per socket:6
   Socket(s): 2
   NUMA node(s):  2
   Vendor ID: GenuineIntel
   CPU family:6
   Model: 62
   Model name:Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
   Stepping:  4
   CPU MHz:   2600.000
   CPU max MHz:   3100.0000
   CPU min MHz:   1200.0000
   BogoMIPS:  5201.74
   Virtualization:VT-x
   L1d cache: 32K
   L1i cache: 32K
   L2 cache:  256K
   L3 cache:  15360K
   NUMA node0 CPU(s): 0-5,12-17
   NUMA node1 CPU(s): 6-11,18-23
   Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm epb ssbd ibrs ibpb stibp kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm arat pln pts flush_l1d
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0010 sec, LOAD: 1.4220 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.3412 sec, LOAD: 1.2661 sec.
   Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.1701 sec, LOAD: 10.9662 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0634 sec, LOAD: 0.2553 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.2422 sec, LOAD: 1.0355 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.6101 sec, LOAD: 1.2469 sec.
   ```
   
   Package used (Python/R/Scala/Julia):
   I'm using Python.
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio): gcc
   
   MXNet commit hash:
   7c617ccc7a8655f3b93acdfac8aeee20eee2a778
   
   Build config:
   In CMakeLists.txt, I set:
   ```
   mxnet_option(USE_CUDA "Build with CUDA support"   OFF)
   mxnet_option(USE_OLDCMAKECUDA "Build with old cmake cuda" OFF)
   mxnet_option(USE_NCCL "Use NVidia NCCL with CUDA" OFF)
   mxnet_option(USE_OPENCV   "Build with OpenCV support" ON)
   mxnet_option(USE_OPENMP   "Build with Openmp support" OFF)
   mxnet_option(USE_CUDNN"Build with cudnn support"  OFF) # one could set CUDNN_ROOT for search path
   mxnet_option(USE_SSE  "Build with x86 SSE instruction support" ON IF NOT ARM)
   mxnet_option(USE_F16C "Build with x86 F16C instruction support" ON) # autodetects support if ON
   mxnet_option(USE_LAPACK   "Build with lapack support" ON)
   mxnet_option(USE_MKL_IF_AVAILABLE "Use MKL if found" ON)
   mxnet_option(USE_MKLML_MKL"Use MKLDNN variant of MKL (if MKL found)" ON IF USE_MKL_IF_AVAILABLE AND (NOT APPLE))
   mxnet_option(USE_MKLDNN   "Use MKLDNN variant of MKL (if MKL found)" ON IF USE_MKL_IF_AVAILABLE AND (NOT APPLE) AND (NOT MSVC) AND (CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "x86_64") AND (NOT CMAKE_CROSSCOMPILING))
   mxnet_option(USE_OPERATOR_TUNING  "Enable auto-tuning of operators" ON IF NOT MSVC)
   mxnet_option(USE_GPERFTOOLS   "Build with GPerfTools support (if found)" ON)
   mxnet_option(USE_JEMALLOC "Build with Jemalloc support" ON)
   ```

[GitHub] ZhennanQin commented on issue #14275: Register fake grad to subgraph and quantized operators

2019-02-27 Thread GitBox
ZhennanQin commented on issue #14275: Register fake grad to subgraph and 
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468169906
 
 
   @xinyu-intel Please merge https://github.com/apache/incubator-mxnet/pull/14276 into this PR, as @TaoLv suggests.




[GitHub] mxnet-label-bot commented on issue #14280: Compiling from source code error

2019-02-27 Thread GitBox
mxnet-label-bot commented on issue #14280: Compiling from source code error
URL: 
https://github.com/apache/incubator-mxnet/issues/14280#issuecomment-468169831
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Build




[GitHub] ZhennanQin commented on issue #14275: Register fake grad to subgraph and quantized operators

2019-02-27 Thread GitBox
ZhennanQin commented on issue #14275: Register fake grad to subgraph and 
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468169582
 
 
   Merging in the correct order would have the same benefit :)




[GitHub] TaoLv commented on issue #14275: Register fake grad to subgraph and quantized operators

2019-02-27 Thread GitBox
TaoLv commented on issue #14275: Register fake grad to subgraph and quantized 
operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468168853
 
 
   Avoiding side effects and keeping the master branch healthy is the benefit.




[GitHub] TaoLv commented on issue #14253: [RFC] Introducing NumPy-compatible coding experience into MXNet

2019-02-27 Thread GitBox
TaoLv commented on issue #14253: [RFC] Introducing NumPy-compatible coding 
experience into MXNet
URL: 
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468168321
 
 
   Not sure I understand the `checkpointing`. Can you explain a bit more? I think we have a memory planning pass to decide whether the data can be overwritten? Also, there are NumPy-based frameworks like Theano and Chainer.




[GitHub] wkcn commented on issue #14268: Add numpy module under root module mxnet

2019-02-27 Thread GitBox
wkcn commented on issue #14268: Add numpy module under root module mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/14268#issuecomment-468168003
 
 
   @reminisce 
   A Gluon block accepts an instance of either mx.nd.NDArray or mx.sym.Symbol.
   We can distinguish between the symbolic and imperative namespaces by the type of the inputs.
   
   An Example:
   ```python
   import mxnet as mx


   class TestBlock(mx.gluon.nn.HybridBlock):
       def hybrid_forward(self, F, x):
           if isinstance(x, mx.nd.NDArray):
               print('the input is an instance of NDArray')
           elif isinstance(x, mx.sym.Symbol):
               print('the input is an instance of Symbol')
           return x


   block = TestBlock()
   a_nd = mx.nd.array([1, 2, 3])
   print(block(a_nd))
   a_sym = mx.sym.Variable('a')
   print(block(a_sym))
   ```




[GitHub] diandianliu opened a new issue #14279: set mxnet single thread

2019-02-27 Thread GitBox
diandianliu opened a new issue #14279: set mxnet single thread
URL: https://github.com/apache/incubator-mxnet/issues/14279
 
 
   Hi,
   I run my program on Linux using the CPU and call mxnet.so. I find that my program has many threads, even though it does not call fork or any other function that might create them, so it may be that mxnet.so itself is multithreaded. I set MXNET_CPU_WORKER_NTHREADS=1, MXNET_CPU_NNPACK_NTHREADS=1, and MXNET_CPU_PRIORITY_NTHREADS=1, but it had no effect. How can I make mxnet.so run in a single thread? Thanks!
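A minimal sketch of the usual first check here (my addition, not from the reporter): MXNet reads these engine variables when the module is first imported, so they must be in the environment before `import mxnet`, and the OpenMP pool used by BLAS/MKL kernels is controlled separately:

```python
import os

# These must be set before mxnet is imported; they are read at import time.
# MXNET_ENGINE_TYPE=NaiveEngine is an extra assumption on my side: it makes
# execution synchronous and avoids the async engine's worker threads.
os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine'
os.environ['MXNET_CPU_WORKER_NTHREADS'] = '1'
os.environ['OMP_NUM_THREADS'] = '1'  # the OpenMP pool spawns its own threads

import mxnet as mx
print(mx.nd.ones((2, 2)))  # runs synchronously on the calling thread
```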




[GitHub] ZhennanQin commented on issue #14275: Register fake grad to subgraph and quantized operators

2019-02-27 Thread GitBox
ZhennanQin commented on issue #14275: Register fake grad to subgraph and 
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468164488
 
 
   @TaoLv Yes, there's a side effect if this is merged before https://github.com/apache/incubator-mxnet/pull/14276, so the correct order is to merge https://github.com/apache/incubator-mxnet/pull/14276 first. If you think reverting is not a big deal, I'm fine with putting them into the same PR, although I don't see any benefit from it.




[GitHub] TaoLv commented on issue #14275: Register fake grad to subgraph and quantized operators

2019-02-27 Thread GitBox
TaoLv commented on issue #14275: Register fake grad to subgraph and quantized 
operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468162743
 
 
   @ZhennanQin I'm afraid this PR has a side effect if it's merged before #14276. Reverting should not be a big deal, as it only changes 9 lines; we always need a PR to revert changes anyway.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-02-27 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 03c1f37  Bump the publish timestamp.
03c1f37 is described below

commit 03c1f3754c0ab0b29284e3aabab0e67362dd6bf6
Author: mxnet-ci 
AuthorDate: Thu Feb 28 07:07:03 2019 +0000

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 0000000..16aee55
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Feb 28 07:07:03 UTC 2019



[GitHub] xinyu-intel commented on issue #14274: added mkldnn dependency for plugin compile target

2019-02-27 Thread GitBox
xinyu-intel commented on issue #14274: added mkldnn dependency for plugin 
compile target
URL: https://github.com/apache/incubator-mxnet/pull/14274#issuecomment-468161272
 
 
   @TaoLv It builds successfully in my local Ubuntu environment.




[GitHub] ZhennanQin commented on issue #14275: Register fake grad to subgraph and quantized operators

2019-02-27 Thread GitBox
ZhennanQin commented on issue #14275: Register fake grad to subgraph and 
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468160543
 
 
   @TaoLv I suggest not merging them together, because this PR is a workaround while https://github.com/apache/incubator-mxnet/pull/14276 isn't. That way we can simply revert this PR when the cached_op refactoring is done.




[GitHub] szha commented on issue #14262: Fix NaN value comparisons in relu, max and min ops

2019-02-27 Thread GitBox
szha commented on issue #14262: Fix NaN value comparisons in relu, max and min 
ops
URL: https://github.com/apache/incubator-mxnet/pull/14262#issuecomment-468159110
 
 
   @anirudhacharya Thanks for the explanation. Should relu grad deal with NaN in a special way?
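A small probe of the behavior under discussion (my own illustration; the values are arbitrary and the PR itself defines the intended semantics):

```python
import numpy as np
import mxnet as mx

# NaN compares false against everything, so max/min/relu kernels built on
# comparisons may keep or drop NaN depending on operand order; this just
# makes the forward and backward behavior visible.
x = mx.nd.array([np.nan, -1.0, 2.0])
x.attach_grad()
with mx.autograd.record():
    y = mx.nd.relu(x)
y.backward()
print(y)       # does NaN propagate through the forward pass?
print(x.grad)  # and what gradient does the NaN element receive?
```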




[GitHub] TaoLv commented on issue #14275: Register fake grad to subgraph and quantized operators

2019-02-27 Thread GitBox
TaoLv commented on issue #14275: Register fake grad to subgraph and quantized 
operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468158121
 
 
   Is it possible to merge this PR into #14276 ? @xinyu-intel @ZhennanQin 
@pengzhao-intel 




[GitHub] junrushao1994 commented on issue #14253: [RFC] Introducing NumPy-compatible coding experience into MXNet

2019-02-27 Thread GitBox
junrushao1994 commented on issue #14253: [RFC] Introducing NumPy-compatible 
coding experience into MXNet
URL: 
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468156743
 
 
   @TaoLv In neural nets, once you do backprop, you cannot overwrite data 
because it destroys checkpointing.
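A concrete version of that point (a minimal example of my own, assuming standard Gluon autograd):

```python
import mxnet as mx

x = mx.nd.array([1.0, 2.0, 3.0])
x.attach_grad()
with mx.autograd.record():
    y = x * x  # backward needs the recorded input: dy/dx = 2x
y.backward()
print(x.grad)  # [2. 4. 6.]
# If a NumPy-style in-place update had overwritten x between record() and
# backward(), the values saved for the backward pass would no longer match
# what the forward pass actually used, silently corrupting the gradient.
```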




[GitHub] TaoLv commented on issue #14274: added mkldnn dependency for plugin compile target

2019-02-27 Thread GitBox
TaoLv commented on issue #14274: added mkldnn dependency for plugin compile 
target
URL: https://github.com/apache/incubator-mxnet/pull/14274#issuecomment-468155829
 
 
   @xinyu-intel Could you help to check if this change fixes the issue?
   @samskalicky @marcoabreu Is it possible to add the build to CI?




[GitHub] TaoLv commented on issue #14253: [RFC] Introducing NumPy-compatible coding experience into MXNet

2019-02-27 Thread GitBox
TaoLv commented on issue #14253: [RFC] Introducing NumPy-compatible coding 
experience into MXNet
URL: 
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468155178
 
 
   @reminisce @szha NumPy has reference/view and stride in its NDArray structure, while MXNet's NDArray doesn't. How does this impact the design of the NumPy-compatible coding experience?
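For readers unfamiliar with the NumPy side of this question, the behavior being referred to is plain NumPy and easy to verify:

```python
import numpy as np

a = np.arange(6, dtype=np.int64).reshape(2, 3)
b = a[:, 1]          # basic indexing returns a strided view, not a copy
b[0] = 100           # so writing through b mutates a as well
print(a)             # [[  0 100   2]
                     #  [  3   4   5]]
print(a.strides)     # (24, 8): bytes to step along each axis for int64
print(b.base is a)   # True: b shares a's buffer
```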




[GitHub] Bumblebee1964 commented on issue #14116: Failure in generated op.h in version 1.3.1

2019-02-27 Thread GitBox
Bumblebee1964 commented on issue #14116: Failure in generated op.h in version 
1.3.1
URL: 
https://github.com/apache/incubator-mxnet/issues/14116#issuecomment-468155093
 
 
   Please fix this. It is annoying to work around, especially when trying to understand the cpp samples, as they won't build out of the box.




[GitHub] roywei opened a new pull request #14278: use cudnn for dropout by default

2019-02-27 Thread GitBox
roywei opened a new pull request #14278: use cudnn for dropout by default
URL: https://github.com/apache/incubator-mxnet/pull/14278
 
 
   ## Description ##
   Enable cudnn for dropout after https://github.com/apache/incubator-mxnet/pull/13896
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   set cudnn_off default to false
   
   
   




[GitHub] szha commented on issue #14223: fix memory-related issues to enable ASAN tests

2019-02-27 Thread GitBox
szha commented on issue #14223: fix memory-related issues to enable ASAN tests
URL: https://github.com/apache/incubator-mxnet/pull/14223#issuecomment-468150071
 
 
   @arcadiaphy thanks! Feel free to PR those changes to the respective repos. 
Once merged, you can change the submodules to point to the new commits there.




[GitHub] arcadiaphy commented on issue #14223: fix memory-related issues to enable ASAN tests

2019-02-27 Thread GitBox
arcadiaphy commented on issue #14223: fix memory-related issues to enable ASAN 
tests
URL: https://github.com/apache/incubator-mxnet/pull/14223#issuecomment-468149375
 
 
   @szha Everything seems OK now; the only problem is that I have changed code in the mshadow and dmlc-core submodules.
   
   @marcoabreu The ASAN log looks clean too.
   
[http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fmiscellaneous/detail/PR-14223/10/pipeline#step-159-log-763](http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fmiscellaneous/detail/PR-14223/10/pipeline#step-159-log-763)
   




[GitHub] ZhennanQin opened a new pull request #14277: Enhance PartitionGraph

2019-02-27 Thread GitBox
ZhennanQin opened a new pull request #14277: Enhance PartitionGraph
URL: https://github.com/apache/incubator-mxnet/pull/14277
 
 
   ## Description ##
   Extracted from https://github.com/apache/incubator-mxnet/pull/14113. This PR 
covers:
   
   * Add `inference_only` attr support when a `SubgraphProperty` is created, to indicate that the pass should be used for inference only.
   * Allow registering multiple subgraph passes under the same backend name.
   * Refactor the way PartitionGraph runs in the simple bind stage to ensure that any graph node reordering is handled correctly.
   
   This PR is the full version of 
https://github.com/apache/incubator-mxnet/pull/14276.
   @xinyu-intel @pengzhao-intel @TaoLv @reminisce @zheng-da
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] JohnLee168 commented on a change in pull request #12456: [MXNET-910] Multithreading inference.

2019-02-27 Thread GitBox
JohnLee168 commented on a change in pull request #12456: [MXNET-910] 
Multithreading inference.
URL: https://github.com/apache/incubator-mxnet/pull/12456#discussion_r261053384
 
 

 ##
 File path: src/c_api/c_predict_api.cc
 ##
 @@ -232,24 +223,117 @@ int MXPredCreatePartialOut(const char* symbol_json_str,
 }
 aux_arrays.push_back(nd);
   }
-  ret->arg_arrays = arg_arrays;
-  ret->aux_arrays = aux_arrays;
   // bind
-  {
-    std::map<std::string, Context> ctx_map;
-    std::vector<NDArray> grad_store(arg_arrays.size());
-    std::vector<OpReqType> grad_req(arg_arrays.size(), kNullOp);
-
-
-    ret->exec.reset(Executor::Bind(sym, ctx, ctx_map,
-                                   arg_arrays,
-                                   grad_store, grad_req,
-                                   aux_arrays));
+  for (int i = 0; i < num_threads; i++) {
+    std::unique_ptr<MXAPIPredictor> ret(new MXAPIPredictor());
+    ret->sym = sym;
+    ret->ctx = ctx;
+    ret->key2arg = key2arg;
+    ret->arg_arrays = arg_arrays;
+    ret->aux_arrays = aux_arrays;
     ret->out_shapes = out_shapes;
-    ret->out_arrays = ret->exec->outputs();
+
+    if (!lazy) {
 
 Review comment:
   > The fundamental problem here is that if we create multiple executors in 
the same thread (e.g., in the main thread), these executors will share the same 
temporary resources, which leads to race condition when these executors are 
used in different threads. To fix this problem, here we avoid creating 
executors when we create predictors in the main thread. The executors are 
actually created when the predictor is used in the worker thread for the first 
time. As long as the executor is always used in this worker thread, there won't 
be race condition.
   
   If I use 10 different PredictorHandles created by MXPredCreate() in the main thread, and for each PredictorHandle call MXPredSetInput(), MXPredForward() and MXPredGetOutput() to run inference in 10 threads, is that safe?
   A thread grabs whichever PredictorHandle is currently available, so a given thread may use different PredictorHandles over time. Is that safe?




[GitHub] szha commented on issue #13896: Cudnn dropout

2019-02-27 Thread GitBox
szha commented on issue #13896: Cudnn dropout
URL: https://github.com/apache/incubator-mxnet/pull/13896#issuecomment-468144518
 
 
   @roywei by default cudnn_off is turned on. You need to turn it off to 
benefit from cudnn dropout.
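A quick way to check which path you are exercising (a sketch that assumes a GPU build and the `cudnn_off` flag added by #13896):

```python
import mxnet as mx

data = mx.nd.ones((32, 128), ctx=mx.gpu(0))
# cudnn_off=False explicitly requests the cuDNN implementation;
# cudnn_off=True falls back to the native kernel. mode='always' applies
# dropout even outside of training, which is handy for benchmarking.
out = mx.nd.Dropout(data, p=0.5, cudnn_off=False, mode='always')
out.wait_to_read()  # force execution past the async engine
```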




[GitHub] szha commented on issue #13825: Dropout is Slow

2019-02-27 Thread GitBox
szha commented on issue #13825: Dropout is Slow
URL: 
https://github.com/apache/incubator-mxnet/issues/13825#issuecomment-468144481
 
 
   @roywei by default cudnn_off is turned on. You need to turn it off to 
benefit from cudnn dropout.




[GitHub] reminisce edited a comment on issue #14268: Add numpy module under root module mxnet

2019-02-27 Thread GitBox
reminisce edited a comment on issue #14268: Add numpy module under root module 
mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/14268#issuecomment-468139721
 
 
   > In gluon, it is available that Block(nd) and Block(sym) are both supported.
   
   Can you elaborate? I don't understand this part.
   
   > It may be better to support mx.numpy.xxx(nd) and mx.numpy.xxx(sym), but using mx.numpy in the forward of a gluon block seems no more elegant than F.numpy.
   
   If implemented in this way, how would you distinguish mx.numpy.xxx between 
symbolic and imperative namespaces? The current design considers the minimum 
change required in Gluon for now and future. With the current design 
`F.numpy.op`, it would be very easy to just delete `F.` to eliminate `ndarray` 
and `symbol` namespaces in the future.
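For readers following this thread, the `F` dispatch mechanism that the proposed `F.numpy.op` spelling builds on already works today. A runnable illustration (the `numpy` sub-namespace itself is still only a design, so this uses an existing operator):

```python
import mxnet as mx

class Scaled(mx.gluon.nn.HybridBlock):
    def hybrid_forward(self, F, x):
        # Gluon binds F to mx.nd imperatively and to mx.sym once the block
        # is hybridized; the proposal routes F.numpy.<op> the same way.
        return F.sum(x * 2)

net = Scaled()
print(net(mx.nd.array([1, 2, 3])))  # F == mx.nd: prints [12.]
net.hybridize()
print(net(mx.nd.array([1, 2, 3])))  # now traced through mx.sym
```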




[GitHub] roywei commented on issue #13896: Cudnn dropout

2019-02-27 Thread GitBox
roywei commented on issue #13896: Cudnn dropout
URL: https://github.com/apache/incubator-mxnet/pull/13896#issuecomment-468141345
 
 
   I'm not able to get the speedup in the test case; see https://github.com/apache/incubator-mxnet/issues/13825#issuecomment-468139928




[GitHub] ZhennanQin opened a new pull request #14276: Skip inference only subgraph pass when gradient is needed.

2019-02-27 Thread GitBox
ZhennanQin opened a new pull request #14276: Skip inference only subgraph pass 
when gradient is needed.
URL: https://github.com/apache/incubator-mxnet/pull/14276
 
 
   ## Description ##
   Skip inference-only subgraph passes when gradients are needed. Extracted from https://github.com/apache/incubator-mxnet/pull/14113
   
   @xinyu-intel @pengzhao-intel @TaoLv @reminisce @zheng-da 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] roywei commented on issue #13825: Dropout is Slow

2019-02-27 Thread GitBox
roywei commented on issue #13825: Dropout is Slow
URL: 
https://github.com/apache/incubator-mxnet/issues/13825#issuecomment-468139928
 
 
   After #13896, it seems only `mx.gluon.nn.Dropout(0.5)` has performance improvements, but not `mx.nd.Dropout(data, 0.5, mode='always')`.
   
   following the above code sample:
   ### Using `mx.nd.Dropout`
   1.43 s ± 293 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
   ### Using Custom Dropout
   331 ms ± 411 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
   
   ### Network using `mx.gluon.nn.Dropout`
   128 ms ± 39.4 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
   
   ### Network without `mx.gluon.nn.Dropout`
   128 ms ± 30.2 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
   
   Tested with both `pip install mxnet-cu92 --pre` and a build from source on a p3.2x.
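For anyone trying to reproduce these numbers, a sketch of the timing loop (my reconstruction; the original snippet referenced above is not included in this digest, and the shape is arbitrary):

```python
import time
import mxnet as mx

data = mx.nd.random.uniform(shape=(8, 512, 1024), ctx=mx.gpu(0))

def bench(fn, warmup=5, runs=50):
    for _ in range(warmup):
        fn().wait_to_read()  # warm up kernels and the async engine
    mx.nd.waitall()
    start = time.time()
    for _ in range(runs):
        fn().wait_to_read()  # block until the result is actually computed
    return (time.time() - start) / runs

print(bench(lambda: mx.nd.Dropout(data, p=0.5, mode='always')))
```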




[GitHub] reminisce commented on issue #14268: Add numpy module under root module mxnet

2019-02-27 Thread GitBox
reminisce commented on issue #14268: Add numpy module under root module mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/14268#issuecomment-468139721
 
 
   > In gluon, it is available that Block(nd) and Block(sym) are both supported.
   
   Can you elaborate? I don't understand this part.
   
   > It may be better to support mx.numpy.xxx(nd) and mx.numpy.xxx(sym), but using mx.numpy in the forward of a gluon block seems no more elegant than F.numpy.
   
   If implemented in this way, how would you differentiate mx.numpy.xxx between 
symbolic and imperative? The current design considers the minimum change 
required in Gluon for now and future. With the current design `F.numpy.op`, it 
would be very easy to just delete `F.` to eliminate `ndarray` and `symbol` 
namespaces in the future.




[GitHub] pengzhao-intel commented on issue #14275: Register fake grad to subgraph and quantized operators

2019-02-27 Thread GitBox
pengzhao-intel commented on issue #14275: Register fake grad to subgraph and 
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468129881
 
 
   This is a temporary solution to enable the GluonCV INT8 flow; we will revert it after the improvement of CachedOp is done.




[GitHub] szha commented on issue #14253: [RFC] Introducing NumPy-compatible coding experience into MXNet

2019-02-27 Thread GitBox
szha commented on issue #14253: [RFC] Introducing NumPy-compatible coding 
experience into MXNet
URL: 
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468126196
 
 
   @anirudh2290 
   
   > Why can't we add the operators under this namespace and make the interface 
changes for existing operators ?
   
   We can. However, there exist some operators in mxnet.ndarray whose names are the same as their numpy counterparts while their behaviors are slightly different; this means they cannot live in the same namespace if we want to preserve backward compatibility. On the other hand, 2.0 is a good opportunity for fixing many of the existing problems besides the operator behaviors, so we'd likely want to take the time. Thus, to start now, having a new namespace would be the most straightforward way to go.
   
   > Have you also considered implementing a separate numpy ndarray
   
   Yes. Creating different array types means we'd start to see diverging user code, with some in ndarray and some in numpy ndarray, which would become harder to migrate later.
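One concrete example of such a collision (my illustration; both calls are verifiable today): `argmax` exists under both names but returns different dtypes, so the two contracts cannot share one namespace without breaking someone's code.

```python
import mxnet as mx
import numpy as np

# MXNet's argmax returns float32 indices; NumPy's returns an integer type.
print(mx.nd.argmax(mx.nd.array([1, 3, 2]), axis=0).dtype)  # float32
print(np.argmax(np.array([1, 3, 2])).dtype)                # int64
```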




[GitHub] JohnLee168 edited a comment on issue #14260: c/c++ multiple threads inference problem

2019-02-27 Thread GitBox
JohnLee168 edited a comment on issue #14260: c/c++ multiple threads inference 
problem
URL: 
https://github.com/apache/incubator-mxnet/issues/14260#issuecomment-468122798
 
 
   > @JohnLee168 based on my understanding and documentation the 
MXPredCreateMultiThread() can be used only when the EngineType is NaiveEngine.
   > 
   > It is possible to reuse PredictorHandle created by MXPredCreate() in a 
single threaded environment by calling MXPredForward().
   > In case of multi-threaded environment, if you want to reuse 
PredictorHandle, you would have to keep the operations MXPredSetInput(), 
MXPredForward() and MXPredGetOutput() in the critical region protected by 
exclusive lock.
   > 
   > Since this is a question, please submit them on MXNet discussion forum 
(https://discuss.mxnet.io), where it will get a wider audience and allow other 
to learn as well.
   > I would propose to close this issue now in favor of the discussion forum 
issue you will file, please feel free to re-open if closed in error.
   > Thanks!"
   > 
   > @mxnet-label-bot add [Question, C API, Thread Safety]
   
   Thanks for your reply.
   So you mean I should either use one thread for inference, or use multiple threads in which MXPredSetInput(), MXPredForward() and MXPredGetOutput() are locked (i.e. those functions can only run in one thread at a time). Is that right?
   The truth is that I use 10 different PredictorHandles, and for each PredictorHandle I call MXPredSetInput(), MXPredForward() and MXPredGetOutput() to run inference in 10 threads. Is that safe?
   I created a new topic on the forum. You can close this issue. Thanks for your help.




[GitHub] JohnLee168 edited a comment on issue #14260: c/c++ multiple threads inference problem

2019-02-27 Thread GitBox
JohnLee168 edited a comment on issue #14260: c/c++ multiple threads inference 
problem
URL: 
https://github.com/apache/incubator-mxnet/issues/14260#issuecomment-468122798
 
 
   > @JohnLee168 based on my understanding and documentation the 
MXPredCreateMultiThread() can be used only when the EngineType is NaiveEngine.
   > 
   > It is possible to reuse PredictorHandle created by MXPredCreate() in a 
single threaded environment by calling MXPredForward().
   > In case of multi-threaded environment, if you want to reuse 
PredictorHandle, you would have to keep the operations MXPredSetInput(), 
MXPredForward() and MXPredGetOutput() in the critical region protected by 
exclusive lock.
   > 
   > Since this is a question, please submit them on MXNet discussion forum 
(https://discuss.mxnet.io), where it will get a wider audience and allow other 
to learn as well.
   > I would propose to close this issue now in favor of the discussion forum 
issue you will file, please feel free to re-open if closed in error.
   > Thanks!"
   > 
   > @mxnet-label-bot add [Question, C API, Thread Safety]
   
   Thanks for your reply.
   So you mean I should either use one thread for inference, or use multiple threads in which MXPredSetInput(), MXPredForward() and MXPredGetOutput() are locked (i.e. those functions can only run in one thread at a time). Is that right?
   I created a new topic on the forum. You can close this issue. Thanks for your help.




[GitHub] JohnLee168 commented on issue #14260: c/c++ multiple threads inference problem

2019-02-27 Thread GitBox
JohnLee168 commented on issue #14260: c/c++ multiple threads inference problem
URL: 
https://github.com/apache/incubator-mxnet/issues/14260#issuecomment-468122798
 
 
   > @JohnLee168 based on my understanding and documentation the 
MXPredCreateMultiThread() can be used only when the EngineType is NaiveEngine.
   > 
   > It is possible to reuse PredictorHandle created by MXPredCreate() in a 
single threaded environment by calling MXPredForward().
   > In case of multi-threaded environment, if you want to reuse 
PredictorHandle, you would have to keep the operations MXPredSetInput(), 
MXPredForward() and MXPredGetOutput() in the critical region protected by 
exclusive lock.
   > 
   > Since this is a question, please submit them on MXNet discussion forum 
(https://discuss.mxnet.io), where it will get a wider audience and allow other 
to learn as well.
   > I would propose to close this issue now in favor of the discussion forum 
issue you will file, please feel free to re-open if closed in error.
   > Thanks!"
   > 
   > @mxnet-label-bot add [Question, C API, Thread Safety]
   
   Thanks for your reply.
   So you mean I should either use one thread for inference, or use multiple threads in which MXPredSetInput(), MXPredForward() and MXPredGetOutput() are locked (i.e. those functions can only run in one thread at a time). Is that right?
   I created a new topic on the forum. And I'll close this issue. Thanks for your help.




[GitHub] anirudh2290 commented on issue #14253: [RFC] Introducing NumPy-compatible coding experience into MXNet

2019-02-27 Thread GitBox
anirudh2290 commented on issue #14253: [RFC] Introducing NumPy-compatible 
coding experience into MXNet
URL: 
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468122597
 
 
   Thanks for the RFC!
   > It is just that users are encouraged to access NumPy operator APIs through `mxnet.numpy` to write pure imperative code and Gluon APIs for achieving hybrid coding experience.
   
   Earlier, mxnet.ndarray was supposed to give you the experience of writing pure imperative code. Why can't we add the operators under this namespace and make the interface changes for the existing operators? Is there a list of operators whose APIs have diverged between numpy and ndarray, and can it be timed with the 2.0 release?
   
   > We can keep the current behavior unchanged and implement a global switch for users to turn on for expecting NumPy-compatible results.
   
   If I understand correctly, even when using the numpy namespace you need to toggle this switch (probably an env variable?) to obtain the correct slicing? Have you also considered implementing a separate numpy ndarray, with specific functions for slicing like `__getitem__` implemented, to avoid using this switch?
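A minimal sketch of what a scoped global switch could look like (purely illustrative; the RFC does not commit to this shape and the names here are my assumptions):

```python
import contextlib
import threading

_STATE = threading.local()  # hypothetical per-thread compatibility flag

def is_np_compat():
    return getattr(_STATE, 'np_compat', False)

@contextlib.contextmanager
def np_compat(active=True):
    """Scope within which operators would use NumPy-compatible semantics."""
    prev = is_np_compat()
    _STATE.np_compat = active
    try:
        yield
    finally:
        _STATE.np_compat = prev

# Operators (e.g. __getitem__) would consult is_np_compat() to choose
# between legacy and NumPy-compatible slicing.
with np_compat():
    assert is_np_compat()
assert not is_np_compat()
```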




[GitHub] anirudh2290 edited a comment on issue #14253: [RFC] Introducing NumPy-compatible coding experience into MXNet

2019-02-27 Thread GitBox
anirudh2290 edited a comment on issue #14253: [RFC] Introducing 
NumPy-compatible coding experience into MXNet
URL: 
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468122597
 
 
   Thanks for the RFC!
   > It is just that users are encouraged to access NumPy operator APIs through `mxnet.numpy` to write pure imperative code and Gluon APIs for achieving hybrid coding experience.
   
   Earlier, mxnet.ndarray was supposed to give you the experience of writing pure imperative code. Why can't we add the operators under this namespace and make the interface changes for the existing operators? Is there a list of operators whose APIs have diverged between numpy and ndarray, and can it be timed with the 2.0 release?
   
   > We can keep the current behavior unchanged and implement a global switch for users to turn on for expecting NumPy-compatible results.
   
   If I understand correctly, even when using the numpy namespace you need to toggle this switch (probably an env variable?) to obtain the correct slicing? Have you also considered implementing a separate numpy ndarray, with specific functions for slicing like `__getitem__` implemented, to avoid using this switch?




[incubator-mxnet] branch master updated: pypi package description. manifest/setup.py update (#14255)

2019-02-27 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 7c617cc  pypi package description. manifest/setup.py update (#14255)
7c617cc is described below

commit 7c617ccc7a8655f3b93acdfac8aeee20eee2a778
Author: Sheng Zha 
AuthorDate: Wed Feb 27 19:02:09 2019 -0800

pypi package description. manifest/setup.py update (#14255)
---
 tools/pip/MANIFEST.in|  3 +++
 tools/pip/doc/CPU_ADDITIONAL.md  | 40 
 tools/pip/doc/CU100MKL_ADDITIONAL.md | 44 
 tools/pip/doc/CU100_ADDITIONAL.md| 44 
 tools/pip/doc/CU75MKL_ADDITIONAL.md  | 42 ++
 tools/pip/doc/CU75_ADDITIONAL.md | 42 ++
 tools/pip/doc/CU80MKL_ADDITIONAL.md  | 42 ++
 tools/pip/doc/CU80_ADDITIONAL.md | 42 ++
 tools/pip/doc/CU90MKL_ADDITIONAL.md  | 42 ++
 tools/pip/doc/CU90_ADDITIONAL.md | 42 ++
 tools/pip/doc/CU91MKL_ADDITIONAL.md  | 42 ++
 tools/pip/doc/CU91_ADDITIONAL.md | 42 ++
 tools/pip/doc/CU92MKL_ADDITIONAL.md  | 42 ++
 tools/pip/doc/CU92_ADDITIONAL.md | 42 ++
 tools/pip/doc/MKL_ADDITIONAL.md  | 40 
 tools/pip/doc/PYPI_README.md | 25 
 tools/pip/setup.py   | 17 +-
 17 files changed, 632 insertions(+), 1 deletion(-)

diff --git a/tools/pip/MANIFEST.in b/tools/pip/MANIFEST.in
index 8037b6a..1edefa0 100644
--- a/tools/pip/MANIFEST.in
+++ b/tools/pip/MANIFEST.in
@@ -16,6 +16,9 @@
 # under the License.
 
 include README
+include LICENSE
+include DISCLAIMER
+include NOTICE
 include mxnet/COMMIT_HASH
 recursive-include mxnet/tools *
 recursive-include mxnet *.py
diff --git a/tools/pip/doc/CPU_ADDITIONAL.md b/tools/pip/doc/CPU_ADDITIONAL.md
new file mode 100644
index 0000000..05be9e5
--- /dev/null
+++ b/tools/pip/doc/CPU_ADDITIONAL.md
@@ -0,0 +1,40 @@
+Prerequisites
+-------------
+This package supports Linux, Mac OSX, and Windows platforms. You may also want 
to check:
+- [mxnet-cu92](https://pypi.python.org/pypi/mxnet-cu92/) with CUDA-9.2 support.
+- [mxnet-cu92mkl](https://pypi.python.org/pypi/mxnet-cu92mkl/) with CUDA-9.2 
support and MKLDNN support.
+- [mxnet-cu91](https://pypi.python.org/pypi/mxnet-cu91/) with CUDA-9.1 support.
+- [mxnet-cu91mkl](https://pypi.python.org/pypi/mxnet-cu91mkl/) with CUDA-9.1 
support and MKLDNN support.
+- [mxnet-cu90](https://pypi.python.org/pypi/mxnet-cu90/) with CUDA-9.0 support.
+- [mxnet-cu90mkl](https://pypi.python.org/pypi/mxnet-cu90mkl/) with CUDA-9.0 
support and MKLDNN support.
+- [mxnet-cu80](https://pypi.python.org/pypi/mxnet-cu80/) with CUDA-8.0 support.
+- [mxnet-cu80mkl](https://pypi.python.org/pypi/mxnet-cu80mkl/) with CUDA-8.0 
support and MKLDNN support.
+- [mxnet-cu75](https://pypi.python.org/pypi/mxnet-cu75/) with CUDA-7.5 support.
+- [mxnet-cu75mkl](https://pypi.python.org/pypi/mxnet-cu75mkl/) with CUDA-7.5 
support and MKLDNN support.
+- [mxnet-mkl](https://pypi.python.org/pypi/mxnet-mkl/) with MKLDNN support.
+
+To install for other platforms (e.g. Windows, Raspberry Pi/ARM) or other 
versions, check [Installing 
MXNet](https://mxnet.incubator.apache.org/versions/master/install/index.html) 
for instructions on building from source.
+
+Installation
+------------
+To install, use:
+```bash
+pip install mxnet
+```
diff --git a/tools/pip/doc/CU100MKL_ADDITIONAL.md 
b/tools/pip/doc/CU100MKL_ADDITIONAL.md
new file mode 100644
index 0000000..f47115c
--- /dev/null
+++ b/tools/pip/doc/CU100MKL_ADDITIONAL.md
@@ -0,0 +1,44 @@
+Prerequisites
+-------------
+This package supports Linux and Windows platforms. You may also want to check:
+- [mxnet-cu100](https://pypi.python.org/pypi/mxnet-cu100/) with CUDA-10.0 
support.
+- [mxnet-cu92](https://pypi.python.org/pypi/mxnet-cu92/) with CUDA-9.2 support.
+- [mxnet-cu92mkl](https://pypi.python.org/pypi/mxnet-cu92mkl/) with CUDA-9.2 
support and MKLDNN support.
+- [mxnet-cu91](https://pypi.python.org/pypi/mxnet-cu91/) with CUDA-9.1 support.
+- [mxnet-cu91mkl](https://pypi.python.org/pypi/mxnet-cu91mkl/) with CUDA-9.1 
support and MKLDNN support.
+- [mxnet-cu90](https://pypi.python.org/pypi/mxnet-cu90/) with CUDA-9.0 support.
+- [mxnet-cu90mkl](https://pypi.python.org/pypi/mxnet-cu90mkl/) with CUDA-9.0 
support and MKLDNN support.
+- [mxnet-cu80](https://pypi.python.org/pypi/mxnet-cu80/) with CUDA-8.0 support.
+- [mxnet-cu80mkl](https://pypi.python.org/pypi

[GitHub] szha commented on issue #14255: pypi package description

2019-02-27 Thread GitBox
szha commented on issue #14255: pypi package description
URL: https://github.com/apache/incubator-mxnet/pull/14255#issuecomment-468118514
 
 
   cu75 script still works for the older versions so I think we can leave it in 
as a record.




[GitHub] szha merged pull request #14255: pypi package description

2019-02-27 Thread GitBox
szha merged pull request #14255: pypi package description
URL: https://github.com/apache/incubator-mxnet/pull/14255
 
 
   




[GitHub] bj5546 commented on issue #14116: Failure in generated op.h in version 1.3.1

2019-02-27 Thread GitBox
bj5546 commented on issue #14116: Failure in generated op.h in version 1.3.1
URL: 
https://github.com/apache/incubator-mxnet/issues/14116#issuecomment-468118035
 
 
   Release 1.4.0 has the same error.




[GitHub] apeforest commented on issue #14274: added mkldnn dependency for plugin compile target

2019-02-27 Thread GitBox
apeforest commented on issue #14274: added mkldnn dependency for plugin compile 
target
URL: https://github.com/apache/incubator-mxnet/pull/14274#issuecomment-468116895
 
 
   Why do we need warpctc if mxnet already has a native implementation?




[GitHub] apeforest commented on issue #14236: Makefile plugins target needs mkldnn dependency

2019-02-27 Thread GitBox
apeforest commented on issue #14236: Makefile plugins target needs mkldnn 
dependency
URL: 
https://github.com/apache/incubator-mxnet/issues/14236#issuecomment-468116777
 
 
   Why do we need warpctc if mxnet already has a native implementation?




[GitHub] reminisce commented on a change in pull request #14270: [MXNET-1330] Bring nnvm::Tuple to mxnet::Tuple

2019-02-27 Thread GitBox
reminisce commented on a change in pull request #14270: [MXNET-1330] Bring 
nnvm::Tuple to mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14270#discussion_r261030214
 
 

 ##
 File path: include/mxnet/tuple.h
 ##
 @@ -0,0 +1,711 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ *  Copyright (c) 2016 by Contributors
+ * \file mxnet/tuple.h
+ * \brief Data structure Tuple and TShape to store dynamic sized shapes.
+ */
+#ifndef MXNET_TUPLE_H_
+#define MXNET_TUPLE_H_
+
+#include <vector>
+#include <string>
+#include <iostream>
+#include <algorithm>
+#include <utility>
+#include <type_traits>
+#include "nnvm/op_attr_types.h"
+#include "nnvm/graph_attr_types.h"
+#include "nnvm/graph.h"
+#include "nnvm/pass.h"
+
+namespace mxnet {
+
+/*!
+ * \brief A dynamic sized array data structure that is optimized for storing
+ *small number of elements with same type.
+ *
+ *  Data will be stored in stack when number of elements is small.
+ *  It is suitable to hold shape of Tensor.
+ *
+ * \tparam ValueType The type of data stored inside tuple.
+ * \sa TShape
+ */
+template<typename ValueType>
+class Tuple {
+ public:
+  /*! \brief default constructor */
+  Tuple() = default;
+  /*! \brief destructor */
+  inline ~Tuple() {
+delete [] data_heap_;
+  }
+  /*!
+   * \brief copy constructor from another tuple
+   * \param s the source tuple
+   */
+  inline Tuple(const Tuple& s) {
+this->assign(s.begin(), s.end());
+  }
+  /*!
+   * \brief constructor from initializer list
+   * \param init the initializer_list
+   */
+  inline Tuple(std::initializer_list<ValueType> init) {
+this->assign(init.begin(), init.end());
+  }
+  /*!
+   * \brief constructor from vector
+   * \param init the vector
+   */
+  inline Tuple(std::vector<ValueType> init) {  // NOLINT(runtime/explicit)
+this->assign(init.begin(), init.end());
+  }
+  /*!
+   * \brief move constructor from Tuple
+   * \param src the source shape
+   */
+
+  inline Tuple(Tuple&& src) {   // NOLINT(runtime/explicit)
+this->swap(src);
+  }
+  /*!
+   * \brief construct the Tuple from content of iterator
+   * \param begin the beginning of iterator
+   * \param end end the end of the iterator
+   * \tparam RandomAccessIterator iterator type
+   */
+  template
+  inline Tuple(RandomAccessIterator begin,
+   RandomAccessIterator end) {
+this->assign(begin, end);
+  }
+  /*!
+   * \brief Assign content to tuple from iterator.
+   * \param begin the beginning of iterator
+   * \param end end the end of the iterator
+   * \tparam RandomAccessIterator iterator type
+   */
+  template
+  inline void assign(RandomAccessIterator begin,
+ RandomAccessIterator end) {
+this->SetDim(end - begin);
+std::copy(begin, end, this->begin());
+  }
+  /*!
+   * \brief Swap current object with other
+   * \param other another object to be swapped.
+   */
+  inline void swap(Tuple& other) {  // NOLINT(*)
+std::swap(ndim_, other.ndim_);
+std::swap(num_heap_allocated_, other.num_heap_allocated_);
+std::swap(data_stack_, other.data_stack_);
+std::swap(data_heap_, other.data_heap_);
+  }
+  /*!
+   * \brief assignment from another tuple.
+   * \param src source tuple
+   * \return reference of self
+   */
+  inline Tuple& operator=(const Tuple& src) {
+this->assign(src.begin(), src.end());
+return *this;
+  }
+  /*!
+   * \brief assignment from rvalue of another tuple.
+   * \param src source tuple
+   * \return reference of self
+   */
+  inline Tuple& operator=(Tuple&& src) {
+Tuple(std::move(src)).swap(*this);
+return *this;
+  }
+  /*!
+   * \brief assignment from initializer list
+   * \param init the source initializer list
+   * \return reference of self
+   */
+  inline Tuple &operator=(std::initializer_list init) {
+this->assign(init.begin(), init.end());
+return *this;
+  }
+  /*!
+   * \return whether two tuple equals
+   * \param s the tuple to compare against
+   */
+  inline bool operator==(const Tuple &s) const {
+if (ndim_ != s.ndim_) return false;
+return std::equal(begin(), end(), s.begin());
+  }
+  /*!
+   * \return whether two tuple not equal
+   * \param s the tuple to compare against
+   */
+  inline bool operator!=(const Tuple &s) const {
+return !(*this 

[GitHub] xinyu-intel opened a new pull request #14275: Register fake grad to subgraph and quantized operators

2019-02-27 Thread GitBox
xinyu-intel opened a new pull request #14275: Register fake grad to subgraph 
and quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275
 
 
   ## Description ##
   **Motivation:**
   Register a fake gradient for subgraph and quantized operators, so that JSON 
files containing inference-only operators can be loaded back as a SymbolBlock 
to run Gluon inference, as sketched below.
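   
   As a minimal sketch of the intended workflow (file names, input name, and 
shape here are hypothetical, e.g. artifacts from an earlier 
net.export('model-quantized')):
   
   ```python
   import mxnet as mx
   
   # Hypothetical artifacts from an earlier export of a quantized model
   sym_file = 'model-quantized-symbol.json'
   param_file = 'model-quantized-0000.params'
   
   # Load the JSON (which may contain inference-only operators) back as a
   # SymbolBlock and run a plain forward pass; no gradients are required.
   net = mx.gluon.SymbolBlock.imports(sym_file, ['data'], param_file, ctx=mx.cpu())
   out = net(mx.nd.zeros((1, 3, 224, 224)))
   ```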
   
   @pengzhao-intel @TaoLv @ZhennanQin @reminisce 
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


[GitHub] GengxinXu commented on issue #14254: MXNET library failing in R

2019-02-27 Thread GitBox
GengxinXu commented on issue #14254: MXNET library failing in R
URL: 
https://github.com/apache/incubator-mxnet/issues/14254#issuecomment-468107605
 
 
   > @GengxinXu yes, if you have the mxnet source code built, building the 
R-package can be done by following this - 
https://mxnet.incubator.apache.org/versions/master/install/osx_setup.html#building-mxnet-from-source-code
   
   @anirudhacharya Thanks!
   I followed your advice, but I still couldn't solve the problem; the same 
error still occurred.


[GitHub] samskalicky opened a new pull request #14274: added mkldnn dependency for plugin compile target

2019-02-27 Thread GitBox
samskalicky opened a new pull request #14274: added mkldnn dependency for 
plugin compile target
URL: https://github.com/apache/incubator-mxnet/pull/14274
 
 
   ## Description ##
   Added the mkldnn dependency for the "plugin" compile target in the Makefile. 
Without this change, the build fails: https://github.com/apache/incubator-mxnet/issues/14236
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [X] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   


[GitHub] junrushao1994 commented on a change in pull request #14270: [MXNET-1330] Bring nnvm::Tuple to mxnet::Tuple

2019-02-27 Thread GitBox
junrushao1994 commented on a change in pull request #14270: [MXNET-1330] Bring 
nnvm::Tuple to mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14270#discussion_r261017652
 
 

 ##
 File path: include/mxnet/tuple.h
 ##
 @@ -0,0 +1,711 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ *  Copyright (c) 2016 by Contributors
+ * \file mxnet/tuple.h
+ * \brief Data structure Tuple and TShape to store dynamic sized shapes.
+ */
+#ifndef MXNET_TUPLE_H_
+#define MXNET_TUPLE_H_
+
+#include <vector>
+#include <type_traits>
+#include <algorithm>
+#include <utility>
+#include <iostream>
+#include <string>
+#include "nnvm/op_attr_types.h"
+#include "nnvm/graph_attr_types.h"
+#include "nnvm/graph.h"
+#include "nnvm/pass.h"
+
+namespace mxnet {
+
+/*!
+ * \brief A dynamic sized array data structure that is optimized for storing
+ *        small number of elements with same type.
+ *
+ *  Data will be stored in stack when number of elements is small.
+ *  It is suitable to hold shape of Tensor.
+ *
+ * \tparam ValueType The type of data stored inside tuple.
+ * \sa TShape
+ */
+template<typename ValueType>
+class Tuple {
+ public:
+  /*! \brief default constructor */
+  Tuple() = default;
+  /*! \brief destructor */
+  inline ~Tuple() {
+    delete [] data_heap_;
+  }
+  /*!
+   * \brief copy constructor from another tuple
+   * \param s the source tuple
+   */
+  inline Tuple(const Tuple<ValueType>& s) {
+    this->assign(s.begin(), s.end());
+  }
+  /*!
+   * \brief constructor from initializer list
+   * \param init the initializer_list
+   */
+  inline Tuple(std::initializer_list<ValueType> init) {
+    this->assign(init.begin(), init.end());
+  }
+  /*!
+   * \brief constructor from vector
+   * \param init the vector
+   */
+  inline Tuple(std::vector<ValueType> init) {  // NOLINT(runtime/explicit)
+    this->assign(init.begin(), init.end());
+  }
+  /*!
+   * \brief move constructor from Tuple
+   * \param src the source shape
+   */
+
+  inline Tuple(Tuple<ValueType>&& src) {   // NOLINT(runtime/explicit)
+    this->swap(src);
+  }
+  /*!
+   * \brief construct the Tuple from content of iterator
+   * \param begin the beginning of iterator
+   * \param end end the end of the iterator
+   * \tparam RandomAccessIterator iterator type
+   */
+  template<typename RandomAccessIterator>
+  inline Tuple(RandomAccessIterator begin,
+               RandomAccessIterator end) {
+    this->assign(begin, end);
+  }
+  /*!
+   * \brief Assign content to tuple from iterator.
+   * \param begin the beginning of iterator
+   * \param end end the end of the iterator
+   * \tparam RandomAccessIterator iterator type
+   */
+  template<typename RandomAccessIterator>
+  inline void assign(RandomAccessIterator begin,
+                     RandomAccessIterator end) {
+    this->SetDim(end - begin);
+    std::copy(begin, end, this->begin());
+  }
+  /*!
+   * \brief Swap current object with other
+   * \param other another object to be swapped.
+   */
+  inline void swap(Tuple<ValueType>& other) {  // NOLINT(*)
+    std::swap(ndim_, other.ndim_);
+    std::swap(num_heap_allocated_, other.num_heap_allocated_);
+    std::swap(data_stack_, other.data_stack_);
+    std::swap(data_heap_, other.data_heap_);
+  }
+  /*!
+   * \brief assignment from another tuple.
+   * \param src source tuple
+   * \return reference of self
+   */
+  inline Tuple<ValueType>& operator=(const Tuple<ValueType>& src) {
+    this->assign(src.begin(), src.end());
+    return *this;
+  }
+  /*!
+   * \brief assignment from rvalue of another tuple.
+   * \param src source tuple
+   * \return reference of self
+   */
+  inline Tuple<ValueType>& operator=(Tuple<ValueType>&& src) {
+    Tuple<ValueType>(std::move(src)).swap(*this);
+    return *this;
+  }
+  /*!
+   * \brief assignment from initializer list
+   * \param init the source initializer list
+   * \return reference of self
+   */
+  inline Tuple<ValueType> &operator=(std::initializer_list<ValueType> init) {
+    this->assign(init.begin(), init.end());
+    return *this;
+  }
+  /*!
+   * \return whether two tuple equals
+   * \param s the tuple to compare against
+   */
+  inline bool operator==(const Tuple<ValueType> &s) const {
+    if (ndim_ != s.ndim_) return false;
+    return std::equal(begin(), end(), s.begin());
+  }
+  /*!
+   * \return whether two tuple not equal
+   * \param s the tuple to compare against
+   */
+  inline bool operator!=(const Tuple<ValueType> &s) const {
+    return !(*this == s);
+  }

[GitHub] leleamol commented on issue #14260: c/c++ multiple threads inference problem

2019-02-27 Thread GitBox
leleamol commented on issue #14260: c/c++ multiple threads inference problem
URL: 
https://github.com/apache/incubator-mxnet/issues/14260#issuecomment-468101141
 
 
   @JohnLee168 Based on my understanding and the documentation, 
MXPredCreateMultiThread() can be used only when the EngineType is NaiveEngine.
   
   It is possible to reuse a PredictorHandle created by MXPredCreate() in a 
single-threaded environment by calling MXPredForward().
   In a multi-threaded environment, if you want to reuse a PredictorHandle, 
you would have to keep the operations MXPredSetInput(), MXPredForward() and 
MXPredGetOutput() in a critical region protected by an exclusive lock.
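   
   A minimal sketch of that locking pattern, shown with the Python module API 
for brevity (the same structure applies to the C-API calls named above; `mod` 
and `batch` are assumed to already exist):
   
   ```python
   import threading
   
   lock = threading.Lock()  # one shared predictor -> serialize its use
   
   def predict(mod, batch):
       # Critical region: set the input, run forward, and read the output
       # under a single exclusive lock, mirroring MXPredSetInput(),
       # MXPredForward() and MXPredGetOutput() in the C predict API.
       with lock:
           mod.forward(batch, is_train=False)
           return mod.get_outputs()[0].asnumpy()
   ```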
   
   Since this is a question, please submit it on the MXNet discussion forum 
(https://discuss.mxnet.io), where it will get a wider audience and allow 
others to learn as well.
I would propose to close this issue now in favor of the discussion forum post 
you will file; please feel free to re-open if closed in error.
Thanks!
   
   @mxnet-label-bot add [Question, C API, Thread Safety]
   
   


[GitHub] wagamama commented on a change in pull request #14222: Add more support for mxnet_to_coreml

2019-02-27 Thread GitBox
wagamama commented on a change in pull request #14222: Add more support for 
mxnet_to_coreml
URL: https://github.com/apache/incubator-mxnet/pull/14222#discussion_r261014524
 
 

 ##
 File path: tools/coreml/test/test_mxnet_converter.py
 ##
 @@ -192,6 +192,15 @@ def test_tiny_tanh_activation_random_input(self):
 net = mx.sym.Activation(net, name='tanh1', act_type="tanh")
 self._test_mxnet_model(net, input_shape=input_shape, mode='random')
 
+def test_tiny_prelu_leakyrelu_random_input(self):
+np.random.seed(1988)
+input_shape = (1, 10)
+net = mx.sym.Variable('data')
+net = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=5)
+gamma = mx.sym.Variable('gamma')
+net = mx.sym.LeakyReLU(net, gamma=gamma, name='prelu1', 
act_type="prelu")
 
 Review comment:
   Done in da9f7cb


[GitHub] wagamama commented on a change in pull request #14222: Add more support for mxnet_to_coreml

2019-02-27 Thread GitBox
wagamama commented on a change in pull request #14222: Add more support for 
mxnet_to_coreml
URL: https://github.com/apache/incubator-mxnet/pull/14222#discussion_r261014463
 
 

 ##
 File path: tools/coreml/test/test_mxnet_converter.py
 ##
 @@ -444,17 +473,39 @@ def test_tiny_conv_random_input_multi_filter(self):
 )
 self._test_mxnet_model(net, input_shape=input_shape, mode='random')
 
+def test_tiny_conv_random_input_multi_group(self):
+np.random.seed(1988)
+input_shape = (1, 16, 10, 10)
+num_filter = 16
+num_group = 4
+kernel = (5, 5)
+stride = (1, 1)
+pad = (0, 0)
+net = mx.sym.Variable('data')
+net = mx.symbol.Convolution(
+data=net,
+num_filter=num_filter,
+num_group=num_group,
+kernel=kernel,
+stride=stride,
+pad=pad,
+name='conv_1'
+)
+self._test_mxnet_model(net, input_shape=input_shape, mode='random')
+
 def test_conv_random(self):
 np.random.seed(1988)
 input_shape = (1, 3, 10, 10)
 num_filter = 64
+num_group = 1
 
 Review comment:
   Done in 0189933


[GitHub] wagamama commented on a change in pull request #14222: Add more support for mxnet_to_coreml

2019-02-27 Thread GitBox
wagamama commented on a change in pull request #14222: Add more support for 
mxnet_to_coreml
URL: https://github.com/apache/incubator-mxnet/pull/14222#discussion_r261013420
 
 

 ##
 File path: tools/coreml/test/test_mxnet_converter.py
 ##
 @@ -192,6 +192,15 @@ def test_tiny_tanh_activation_random_input(self):
 net = mx.sym.Activation(net, name='tanh1', act_type="tanh")
 self._test_mxnet_model(net, input_shape=input_shape, mode='random')
 
+def test_tiny_prelu_leakyrelu_random_input(self):
+np.random.seed(1988)
+input_shape = (1, 10)
+net = mx.sym.Variable('data')
+net = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=5)
+gamma = mx.sym.Variable('gamma')
+net = mx.sym.LeakyReLU(net, gamma=gamma, name='prelu1', 
act_type="prelu")
 
 Review comment:
   Sure, I will add "leaky" and "elu".


[GitHub] leleamol commented on issue #14106: Question on ResNet C++ example with Cifar10 dataset

2019-02-27 Thread GitBox
leleamol commented on issue #14106: Question on ResNet C++ example with Cifar10 
dataset
URL: 
https://github.com/apache/incubator-mxnet/issues/14106#issuecomment-468095295
 
 
   @mxnet-label-bot  add [Pending Requester Info]


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-02-27 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 99d03d8  Bump the publish timestamp.
99d03d8 is described below

commit 99d03d85ee4e4162dc360bcb51e367f84d46df1a
Author: mxnet-ci 
AuthorDate: Thu Feb 28 01:06:04 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..3729d94
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Feb 28 01:06:04 UTC 2019



[GitHub] eric-haibin-lin commented on a change in pull request #14240: use safe accumulation for norm

2019-02-27 Thread GitBox
eric-haibin-lin commented on a change in pull request #14240: use safe 
accumulation for norm
URL: https://github.com/apache/incubator-mxnet/pull/14240#discussion_r261009436
 
 

 ##
 File path: src/operator/mxnet_op.h
 ##
 @@ -273,25 +273,42 @@ inline int get_num_threads(const int N) {
       }                                                      \
       break;                                                 \
     case mshadow::kUint8:                                    \
-      LOG(FATAL) << "This operation only support "           \
-        "floating point types not uint8";                    \
+      {                                                      \
+        typedef uint8_t DType;                               \
+        typedef uint8_t AType;                               \
+        LOG(FATAL) << "This operation only support "         \
+          "floating point types not uint8";                  \
+      }                                                      \
       break;                                                 \
     case mshadow::kInt8:                                     \
-      LOG(FATAL) << "This operation only support "           \
-        "floating point types not int8";                     \
+      {                                                      \
+        typedef int8_t DType;                                \
+        typedef int8_t AType;                                \
 
 Review comment:
   can we support acc in int types, too?
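   
   For context, a small Python illustration of the overflow that safe 
accumulation guards against (values chosen only to force fp16 overflow in the 
sum of squares):
   
   ```python
   import mxnet as mx
   
   x = mx.nd.full((10000,), 10.0, dtype='float16')
   # The true L2 norm is 1000.0, but the intermediate sum of squares (1e6)
   # exceeds float16's ~65504 maximum, so accumulating in float16 rather
   # than a wider type degrades the result to inf.
   print(mx.nd.norm(x))
   ```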


[GitHub] eric-haibin-lin commented on a change in pull request #14240: use safe accumulation for norm

2019-02-27 Thread GitBox
eric-haibin-lin commented on a change in pull request #14240: use safe 
accumulation for norm
URL: https://github.com/apache/incubator-mxnet/pull/14240#discussion_r261009149
 
 

 ##
 File path: src/operator/tensor/broadcast_reduce-inl.cuh
 ##
 @@ -610,14 +609,22 @@ void ReduceImpl(cudaStream_t stream, const TBlob& small, const TBlob& lhs, const
 
 #undef KERNEL_UNROLL_SWITCH
 
-template<typename Reducer, int ndim, typename DType, typename OP>
+template<typename Reducer, int ndim, typename DType, typename OP, bool safe_acc = false>
 void Reduce(Stream<gpu> *s, const TBlob& small, const OpReqType req,
             const Tensor<gpu, 1, char>& workspace, const TBlob& big) {
   if (req == kNullOp) return;
   cudaStream_t stream = Stream<gpu>::GetStream(s);
   ReduceImplConfig<ndim> config =
     ConfigureReduceImpl<ndim, DType>(small.shape_, big.shape_, NULL, NULL);
-  ReduceImpl<Reducer, ndim, DType, OP>(stream, small, req, big, workspace, config);
+  if (safe_acc) {
+    MXNET_REAL_ACC_TYPE_SWITCH(mshadow::DataType<DType>::kFlag, DataType, AType, {
 
 Review comment:
   Need to support acc type for int, etc


[GitHub] leleamol commented on issue #14265: Bug in Optimizer's serializeState and deserializeState methods (Scala)

2019-02-27 Thread GitBox
leleamol commented on issue #14265: Bug in Optimizer's serializeState and 
deserializeState methods (Scala)
URL: 
https://github.com/apache/incubator-mxnet/issues/14265#issuecomment-468086452
 
 
   @mxnet-label-bot add [Scala, Bug]


[GitHub] roywei commented on issue #14273: move choose_element_0index to operator

2019-02-27 Thread GitBox
roywei commented on issue #14273: move choose_element_0index to operator
URL: https://github.com/apache/incubator-mxnet/pull/14273#issuecomment-468081913
 
 
   cc @apeforest 
   @mxnet-label-bot add [Operator, pr-awaiting-review]
   


[GitHub] roywei opened a new pull request #14273: move choose_element_0index to operator

2019-02-27 Thread GitBox
roywei opened a new pull request #14273: move choose_element_0index to operator
URL: https://github.com/apache/incubator-mxnet/pull/14273
 
 
   ## Description ##
   Fix https://github.com/apache/incubator-mxnet/issues/7853 by moving the 
legacy choose_element_0index operator to be an alias of the pick operator. It 
is still used in the Scala and C++ packages and in the DQN example; a short 
sketch of pick follows below.
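   
   For reference, a tiny sketch of the pick operator that 
choose_element_0index now aliases (made-up values, default behavior):
   
   ```python
   import mxnet as mx
   
   x = mx.nd.array([[1., 2.], [3., 4.], [5., 6.]])
   idx = mx.nd.array([0, 1, 0])        # one column index per row
   print(mx.nd.pick(x, idx, axis=1))   # -> [1. 4. 5.]
   ```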
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


[GitHub] wkcn edited a comment on issue #14268: Add numpy module under root module mxnet

2019-02-27 Thread GitBox
wkcn edited a comment on issue #14268: Add numpy module under root module mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/14268#issuecomment-468076387
 
 
   I think it is confusing that mx.numpy, mx.nd.numpy and mx.sym.numpy all 
exist.
   
   In Gluon, both Block(nd) and Block(sym) are supported.
   
   It may be better to support mx.numpy.xxx(nd) and mx.numpy.xxx(sym), but 
using mx.numpy in the forward of a Gluon block seems no more elegant than 
F.numpy.


[GitHub] wkcn commented on issue #14268: Add numpy module under root module mxnet

2019-02-27 Thread GitBox
wkcn commented on issue #14268: Add numpy module under root module mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/14268#issuecomment-468076387
 
 
   I think it is confusing that mx.numpy, mx.nd.numpy and mx.sym.numpy all 
exist.
   
   In Gluon, both Block(nd) and Block(sym) are supported.
   
   It may be better to support mx.numpy.xxx(nd) and mx.numpy.xxx(sym).


[incubator-mxnet] tag 1.4.0.rc0 deleted (was c84bb78)

2019-02-27 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a change to tag 1.4.0.rc0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


*** WARNING: tag 1.4.0.rc0 was deleted! ***

 was c84bb78  Add bug fix #13686 to release note (#13691)

The revisions that were on this tag are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[incubator-mxnet] tag 1.4.0.rc2 deleted (was e999a46)

2019-02-27 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a change to tag 1.4.0.rc2
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


*** WARNING: tag 1.4.0.rc2 was deleted! ***

 was e999a46  Use CPUPinned context in ImageRecordIOParser2 (#13980) 
(#13990)

The revisions that were on this tag are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[incubator-mxnet] tag 1.4.0.rc1 deleted (was 45a1554)

2019-02-27 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a change to tag 1.4.0.rc1
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


*** WARNING: tag 1.4.0.rc1 was deleted! ***

 was 45a1554  api change (#13905)

The revisions that were on this tag are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[GitHub] stephenrawls commented on issue #14264: nd.reshape truncate values

2019-02-27 Thread GitBox
stephenrawls commented on issue #14264: nd.reshape truncate values
URL: 
https://github.com/apache/incubator-mxnet/issues/14264#issuecomment-468069818
 
 
   Not sure about others but I like this behavior.
   
   It allows me to create a maximum-sized array in imperative mode, and 
re-shape it to the right size each time through the loop at zero cost and with 
zero allocations.
   
   When running in training mode with autograd this will give you the error you 
want though:
   
   ```
   >>> import mxnet as mx
   >>> x = mx.nd.random.randn(10)
   >>> x.reshape(1,2)
   
   [[1.1630787 0.4838046]]
   <NDArray 1x2 @cpu(0)>
   
   >>> with mx.autograd.record():
   ...   x.reshape(1,2)
   ... 
   Traceback (most recent call last):
     File "<stdin>", line 2, in <module>
     File "/usr/local/lib/python3.7/site-packages/mxnet/ndarray/ndarray.py", line 1062, in reshape
       ctypes.byref(handle)))
     File "/usr/local/lib/python3.7/site-packages/mxnet/base.py", line 251, in check_call
       raise MXNetError(py_str(_LIB.MXGetLastError()))
   mxnet.base.MXNetError: [15:15:07] src/ndarray/ndarray.cc:229: Check failed: 
shape_.Size() == shape.Size() (10 vs. 2) NDArray.Reshape: target shape must 
have the same size as current shape when recording with autograd.
   ```
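   
   A small sketch of the reuse pattern described above (sizes are made up; 
outside of autograd recording the reshape neither copies nor reallocates):
   
   ```python
   import mxnet as mx
   
   buf = mx.nd.empty((1024,))         # maximum-sized scratch buffer
   for n in (10, 50, 7):              # varying sizes across iterations
       view = buf.reshape((n,))       # zero-cost view over the first n elements
       view[:] = mx.nd.random.randn(n)
   ```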


[incubator-mxnet] tag 1.4.0.rc3 deleted (was a03d59e)

2019-02-27 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a change to tag 1.4.0.rc3
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


*** WARNING: tag 1.4.0.rc3 was deleted! ***

 was a03d59e  Fix gtest build (#13926)

The revisions that were on this tag are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[GitHub] hetong007 commented on issue #14269: Updated docs for R-package installation

2019-02-27 Thread GitBox
hetong007 commented on issue #14269: Updated docs for R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#issuecomment-468066943
 
 
   @ankkhedia Since you have built cu80 for MXNET R 1.3, would you please help 
@piyushghai to build cu80? It is helpful for us to go through and fix the 
configuration, so that we can have a smooth pipeline for the R releases in the 
future.


[incubator-mxnet] tag 1.4.0 created (now a03d59e)

2019-02-27 Thread lanking
This is an automated email from the ASF dual-hosted git repository.

lanking pushed a change to tag 1.4.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at a03d59e  (commit)
No new revisions were added by this update.



[GitHub] mxnet-label-bot commented on issue #14272: [Clojure] Change the NDArray Example to use the ndarray formatted printer

2019-02-27 Thread GitBox
mxnet-label-bot commented on issue #14272: [Clojure] Change the NDArray Example 
to use the ndarray formatted printer
URL: 
https://github.com/apache/incubator-mxnet/issues/14272#issuecomment-468065368
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Doc


[GitHub] gigasquid opened a new issue #14272: [Clojure] Change the NDArray Example to use the ndarray formatted printer

2019-02-27 Thread GitBox
gigasquid opened a new issue #14272: [Clojure] Change the NDArray Example to 
use the ndarray formatted printer
URL: https://github.com/apache/incubator-mxnet/issues/14272
 
 
   The Scala package has a new toString method for NDArray that can print out 
content.
   
   This would be nice to showcase in our NDArray Tutorial.
   
   Example:
   
   ```clojure
   user=> (println (str (ndarray/ones [3 3])))
   [
[1.0,1.0,1.0]
[1.0,1.0,1.0]
[1.0,1.0,1.0]
   ]
   
   nil
   user=> 
   ```
   
   https://mxnet.apache.org/api/clojure/index.html
   https://mxnet.apache.org/api/clojure/ndarray.html


[GitHub] mxnet-label-bot commented on issue #14271: [Clojure] Add helper function to convert formed vector to NDArray (infers shape)

2019-02-27 Thread GitBox
mxnet-label-bot commented on issue #14271: [Clojure] Add helper function to 
convert formed vector to NDArray (infers shape)
URL: 
https://github.com/apache/incubator-mxnet/issues/14271#issuecomment-468064686
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended labels: Feature


[GitHub] gigasquid opened a new issue #14271: [Clojure] Add helper function to convert formed vector to NDArray (infers shape)

2019-02-27 Thread GitBox
gigasquid opened a new issue #14271: [Clojure] Add helper function to convert 
formed vector to NDArray (infers shape)
URL: https://github.com/apache/incubator-mxnet/issues/14271
 
 
   We have an ndarray function called `array` that will take a 1-d Clojure 
vector and a shape vector and turn it into an ndarray. The Scala package has a 
new helper function that allows you to pass in a multi-dimensional 
float/double array and have the NDArray created with its shape inferred.
   
   It would be nice to create an interop function for this.
   Example of interop:
   
   ```clojure
   user=> (def x [[1.0 2.0 3.0] [4.0 5.0 6.0]])
   #'user/x
   user=> (NDArray/toNDArray (to-array (mapv #(to-array %) x)) nil)
   #object[org.apache.mxnet.NDArray 0x382169db "[\n [1.0,2.0,3.0]\n 
[4.0,5.0,6.0]\n]\n"]
   user=> 
   ```
   
   


[GitHub] junrushao1994 commented on issue #14235: /usr/bin/ld: cannot find -lsatlas

2019-02-27 Thread GitBox
junrushao1994 commented on issue #14235: /usr/bin/ld: cannot find -lsatlas
URL: 
https://github.com/apache/incubator-mxnet/issues/14235#issuecomment-468064619
 
 
   [These 
lines](https://github.com/apache/incubator-mxnet/blob/master/Makefile#L176-L187)
 in the Makefile check whether LAPACK is properly installed. Could you try 
again and let me know whether there is a warning?


[GitHub] junrushao1994 edited a comment on issue #14235: /usr/bin/ld: cannot find -lsatlas

2019-02-27 Thread GitBox
junrushao1994 edited a comment on issue #14235: /usr/bin/ld: cannot find 
-lsatlas
URL: 
https://github.com/apache/incubator-mxnet/issues/14235#issuecomment-468064619
 
 
   @HaichaoZhu 
   
   [These 
lines](https://github.com/apache/incubator-mxnet/blob/master/Makefile#L176-L187)
 in the Makefile check whether LAPACK is properly installed. Could you try 
again and let me know whether there is a warning?


[GitHub] wagamama commented on a change in pull request #14222: Add more support for mxnet_to_coreml

2019-02-27 Thread GitBox
wagamama commented on a change in pull request #14222: Add more support for 
mxnet_to_coreml
URL: https://github.com/apache/incubator-mxnet/pull/14222#discussion_r260981801
 
 

 ##
 File path: tools/coreml/test/test_mxnet_converter.py
 ##
 @@ -444,17 +473,39 @@ def test_tiny_conv_random_input_multi_filter(self):
 )
 self._test_mxnet_model(net, input_shape=input_shape, mode='random')
 
+def test_tiny_conv_random_input_multi_group(self):
+np.random.seed(1988)
+input_shape = (1, 16, 10, 10)
+num_filter = 16
+num_group = 4
+kernel = (5, 5)
+stride = (1, 1)
+pad = (0, 0)
+net = mx.sym.Variable('data')
+net = mx.symbol.Convolution(
+data=net,
+num_filter=num_filter,
+num_group=num_group,
+kernel=kernel,
+stride=stride,
+pad=pad,
+name='conv_1'
+)
+self._test_mxnet_model(net, input_shape=input_shape, mode='random')
+
 def test_conv_random(self):
 np.random.seed(1988)
 input_shape = (1, 3, 10, 10)
 num_filter = 64
+num_group = 1
 
 Review comment:
   It doesn't fail.
   I will rollback this change.


[GitHub] piyushghai commented on issue #14269: Updated docs for R-package installation

2019-02-27 Thread GitBox
piyushghai commented on issue #14269: Updated docs for R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#issuecomment-468064377
 
 
   > If I recall correctly, you need an earlier version of visual studio to 
build cu80. Have you tried that?
   
   Yes. I tried with VS 2015 (not 2017). Still did not work.


[GitHub] hetong007 commented on issue #14269: Updated docs for R-package installation

2019-02-27 Thread GitBox
hetong007 commented on issue #14269: Updated docs for R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#issuecomment-468063691
 
 
   If I recall correctly, you need an earlier version of visual studio to build 
cu80. Have you tried that?


[GitHub] eric-haibin-lin merged pull request #14221: [op] add back support for scalar type rescale_grad argument for adamw_update/mp_adamw_update

2019-02-27 Thread GitBox
eric-haibin-lin merged pull request #14221: [op] add back support for scalar 
type rescale_grad argument for adamw_update/mp_adamw_update
URL: https://github.com/apache/incubator-mxnet/pull/14221
 
 
   


svn commit: r32684 - /release/incubator/mxnet/KEYS

2019-02-27 Thread lanking
Author: lanking
Date: Wed Feb 27 22:46:58 2019
New Revision: 32684

Log:
update key for Qing Lan

Modified:
release/incubator/mxnet/KEYS

Modified: release/incubator/mxnet/KEYS
==
--- release/incubator/mxnet/KEYS (original)
+++ release/incubator/mxnet/KEYS Wed Feb 27 22:46:58 2019
@@ -658,3 +658,93 @@ a4LYL628Ksuv1Yxn/Uhb5nDPxU5RKRDeogn07wta
 3Os=
 =XL0V
 -END PGP PUBLIC KEY BLOCK-
+
+pub   rsa2048 2019-02-15 [SC] [expires: 2021-02-14]
+  0812952358B12DC30536E7E0C06916C3AB88ABFE
+uid   [ultimate] Qing Lan 
+sig 3        C06916C3AB88ABFE 2019-02-15  Qing Lan 
+sub   rsa2048 2019-02-15 [E] [expires: 2021-02-14]
+sig  C06916C3AB88ABFE 2019-02-15  Qing Lan 
+
+-BEGIN PGP PUBLIC KEY BLOCK-
+
+mQENBFwb8GEBCADjmi5ZXihPfPbLxhNpbD4HVdT7xGEQgfgkeTA4TdFyBP81zM0F
+dTHPQOhHzPGkHrVwUPt3ir2hS46q3L8wni3VRkUU8KPbqbS/a7Wl7LHFFS0lU36J
+3uQLElZOFITlaL1dl7cIv+c8xCfmOPlmEtNAPIB26sIM5qzc5l4xvNf1H0Oq0wo6
+VKCsYb4el4nys2U3UBYVQjGyBEwwemHQmFPKg6a2bc/2UhWn4Z+//g0hzIpYtT/S
+jua16r5SHy6BUtFGWfU6LQIwmxqc0TNqRdkDU0QY0A+nT6cgx6ghp/qxoLOhgken
+Uw8rutC/oFg2VzS9yVsJNrkbQq5Fl/Mz/wqNABEBAAG0HWxhbmtpbmcgPGxhbmtp
+bmdAbGFua2luZy5uZXQ+iQFUBBMBCAA+FiEECukpOOn0VXTzKWFpZ3gsZdQlyvsF
+Alwb8GECGwMFCQPCZwAFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQZ3gsZdQl
+yvtmiAgAnFxzWwPSoHZW9U8ejNtLXxqKmfrUVHDJKVVE0zZPEs9kEk95YZ99jqUS
+LagTZwdQOGCJe6Fc9XESW5KdzvnwQyJKwGPCIYDVA/CRuGDbs+tY6HCZ7V1DMGYm
+EJsQEzdqWKaNLEVNk1LB3uizEjTS3BIVaVZ0+uovv2joM1mgHX9obGZAZRpBm7zG
+TsDwQMuAaWpgnh+BQ9b8O0m6Mdlqh4lAHJmqLdwIz3rD1SLKst0WenjVtyBE1k5t
+gBSVfQeMThhcbPITUTLdhAmc27QrsJdDdsH42CVhIJMW6aQnBnEKE3QB+C6DzpTi
+UIKqPg99PXmBc9BvlugrZRM4KYiqOLkBDQRcG/BhAQgAzS6PeCCIwImwqSRRnwta
+R/7ZbjhMDbnbiHhBKjy08vt2tR8cRK8X7utVwd96u0b3a6J4k0VzMcDP7wBYiY0x
+GDbEVsmeXxiqh3+Dv2R+/CBebbswlMJqG+imEaGAAA1bk8U8RrjxuKAUOlyVjEaG
+m9flPKOo33SK7UTujfnpSZhA6s0wzRSMn3/26c6+/9vpo8pnqooiD+K7eMhIUfjB
++hFRDJxrEPg9pj3hCJhXA1aCIMrq8jjkm+8PyDQWdt9TdnHLD+b5X+DKNNBpZsIk
+IxNTqxNNLan4JZ+70z5m68Lqy4EEHFoHwhn8IFjxOMiJ+6MuP32UdYmvUD3owc8y
+nwARAQABiQE8BBgBCAAmFiEECukpOOn0VXTzKWFpZ3gsZdQlyvsFAlwb8GECGwwF
+CQPCZwAACgkQZ3gsZdQlyvvYwQf/ctlGApkvSxmhKgCTPBCrudGDpO3QsEF+bR6y
+w9WvyvniCt6t5M2A+QPOCqdBvNaP/4wP1H2XghKr9XZcTBlxAJc6pdwxQ6IH6SD8
+syEDVTp0oiQytuyWeNoTM9bOFsliCkjiAVphTDTuVrNEgqdnTMo2cDZgry6gwD0Q
+1ZOejcb7kIjiThzaKmgpPnnDNkiK2j25fiNhpiIzEVryLKoXvyoj//C/p8lm9KRk
+/2bzRm84uoRsUlrr6I5qizakoODkh5+DlzAaes1fv8ED11zzrlIZjh1vBBgDd3QT
+jurgAMn1+WYXhnTVUw8+JjWbmZX4NqJJGZyozkF7rgzI3A9CPJkBDQRcLmPJAQgA
+6P5AdfBp4QJKfJC96BjDIrXAFXTytD5wxBsyl1WUcq9vnkX6NZGEtTgsuR/F+FBt
+Za7fuDRf+ZT5J7sFQbKFJKRvR8shg1dqnIsoztBD0ESC46trD/YZrisDb09JUXN+
+zvha7jUisA7/pVwfenQM8CpnZJiuKIDU5qCTUr7qOjds6g9aCtUSOK41xLAn1y9b
+hKJko4xtjUQu6hXjRXXKltl1N0X5TQCw3sR105xTTWx9nAgFu7cc1uGGuOPCpSms
+FPkXFMz+75ff9eXtnmpM+9ZxE8c0jTN7p3U16nd2spsQOqQd83nKb3AbfmWrUYC6
+lmiXGlyOqf8fXSSj/d1ZWwARAQABtB5RaW5nIExhbiA8bGFua2luZzUyMEBsaXZl
+LmNvbT6JAVQEEwEIAD4WIQSEDPQiKrv4QoSvRxUvo3RD/I+Y+gUCXC5jyQIbAwUJ
+A8JnAAULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRAvo3RD/I+Y+vjlB/4h3trK
+2wGFn396m/jqLqBGObCrOTQYJVycfKdndTEvzIm5/U1gU0Q2BzP25k3gYQjjYKDB
+ZN5ETrvRa+NUHzK9p4L4mEQpyWI1WR7jJ19Qg3wfjXW2lOJ8hVvGDILKm7JHEOFb
+xf8bUsIftAJ+eDs0dFkDEZT+VLGIXWghd236oUy/uYKL0nLofyFZL3IHiBRZkZQf
+6nW+wuYlNXJ+Gy8ATUOKmqbhKScOYLT88nvd6XK8ht3agvATTiEZNa7uWeSOZqp/
+Sx10jUKvKif1L3KiRADb461zQdijx1VOpj3OZPKwCViag14TGXot/YsjAHLCROPg
+RH7Ey+WHqMzrkI/EuQENBFwuY8kBCADG74gBceYTyUZbwP6y1xKElXacZ4r2aovT
+xoyEXGGovkVjR30GT4XWpKBbqFk5S+QtRnJJGCMeVeKbruZc2Jv5cNe5WL8t20SU
+shkBi89h2ZWOgw5c8h+5axjyJfPzWuq0PoXZZ0poFVuSojEFuQKotBWqalri3SKu
+E7dUlPQ1JzCj60dX03Lqb2fZZEuwNTf3VwXZZHCgRKw0jhvlr0yfZVcIx4C1Vizo
+rkmgZoKbTTkcjBwG4xU+QAwtZqtqGUcZsGwGRD/VPYoDwVknTWJN9JwGasM2gdtG
+y0IvhjHfhahZXPyiDsUpkvHyrjaKUwF9qkNLlnmV0LxRJYfNYkk1ABEBAAGJATwE
+GAEIACYWIQSEDPQiKrv4QoSvRxUvo3RD/I+Y+gUCXC5jyQIbDAUJA8JnAAAKCRAv
+o3RD/I+Y+ql8CADZJxKXAEhBN6ao0+OE2RM/+IPUgIuW/B33KxYN/SewVaUeZvIe
+Nu3TWLiFa4eTOjc2hNZxRmQRtpmX7faJaynuz/bsfdtdsscuWxNUUiF3Tdd9/Eon
+eFaywH8as9uFwYLRmfdb5cvst97XofQOJBEfolOgyGOEEuEVxs6bFa43/SSYCliE
+MjhxubT8NqQmIsHGQ4nltxCbmVZNKi7eAjms+QkWX5bqWMqXF5hJisagXvO1Jr75
+ncoq3Wp+X1b+9x8KDGCVd1F5+d0OksPrnNIQVOS8TMw6bJzXLw4yQni+1ZtYTw8p
+9eG+gakLwBnBvqs+7HmuvPNELZX5SsKfwRYzmQENBFxnEhcBCADFLKeuIwb8XnYd
+MsAEbQIyueHsON04/XJVvULVHlbpF7X4XyUU2aHxkydFN5yGtJZjE7/BXrPSeoK8
+0qtTblb7ZugUgnyhXaN97RqdD6Lv6kha6EFSylUG9Wyfi1Onr0zlgqQ5TwbaTbLJ
+UuzN5d7UIPNFZBrWgvH6AC982M3kMGBWEPRiJgCs/k601oX84g6HbynuxaySts1W
+N8mC1TjzVuTRq6Cbfr1RAkNcj3mz66DpZLnHsBUPuQ5u0gJTKZyEGRxtT02eXpMI
+61eP+79hnAwqZds9BXwO+3cCx/x3qJRLNDpR4mPq9QqETkqTIQd6JPjWPRh0OQDK
+300hDmizABEBAAG0HVFpbmcgTGFuIDxsYW5raW5nQGFwYWNoZS5vcmc+iQFUBBMB
+CAA+FiEECBKVI1ixLcMFNufgwGkWw6uIq/4FAlxnEhcCGwMFCQPCZwAFCwkIBwIG
+FQoJCAsCBBYCAwECHgECF4AACgkQwGkWw6uIq/5KDgf/S8XUCzm+aVJUAcx9BuQV
+SzvG8C1Bf/XrYXcm3NxEOMejdkt8FAnaPt9dZM3U8CaMu1a90iiuLyvr3aJvb7/K
+/WuaK79jFB+zz8bifQANXgwAQPiUjBU/8FNOjiEywaSYwIvy8SMo5lLs2edV4Y8g
+J73BU/WID3aH/BK4sclAs3brNY5le3B2GiAXhqCWB567ZTlc6vpf1DBaivVX8J0H
+cEBIDRuitSkWrFYGmx4Ddr7Jebkg

[incubator-mxnet] branch master updated: [op] add back support for scalar type rescale_grad argument for adamw_update/mp_adamw_update (#14221)

2019-02-27 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new e3a51b5  [op] add back support for scalar type rescale_grad argument 
for adamw_update/mp_adamw_update (#14221)
e3a51b5 is described below

commit e3a51b5a3ed989bf1e9c9f53b56819b32957527f
Author: Haibin Lin 
AuthorDate: Wed Feb 27 14:46:33 2019 -0800

[op] add back support for scalar type rescale_grad argument for 
adamw_update/mp_adamw_update (#14221)

* support scalar

* remove two copies of documentation for adamw

* fix lint
---
 python/mxnet/ndarray/contrib.py | 26 +
 python/mxnet/ndarray/register.py| 11 +++
 python/mxnet/symbol/contrib.py  | 22 +
 python/mxnet/symbol/register.py | 11 +++
 src/operator/contrib/adamw.cc   | 10 ++
 src/operator/contrib/adamw.cu   |  4 ++--
 tests/python/unittest/test_contrib_optimizer.py | 12 
 7 files changed, 90 insertions(+), 6 deletions(-)

diff --git a/python/mxnet/ndarray/contrib.py b/python/mxnet/ndarray/contrib.py
index 6bbee8a..74c355d 100644
--- a/python/mxnet/ndarray/contrib.py
+++ b/python/mxnet/ndarray/contrib.py
 @@ -542,3 +542,29 @@ def isnan(data):
 
     """
     return data != data
+
+def adamw_update(weight, grad, mean, var, rescale_grad, lr, eta, beta1=0.9, beta2=0.999,
+                 epsilon=1e-8, wd=0, clip_gradient=-1, out=None, name=None, **kwargs):
+    if not isinstance(rescale_grad, ndarray.NDArray):
+        rescale_grad = ndarray.full(shape=(1,), val=rescale_grad, ctx=weight.context)
+    else:
+        rescale_grad = rescale_grad.as_in_context(weight.context)
+    return ndarray._internal._adamw_update(weight=weight, grad=grad, mean=mean, var=var,
+                                           rescale_grad=rescale_grad, lr=lr, eta=eta,
+                                           beta1=beta1, beta2=beta2, epsilon=epsilon,
+                                           wd=wd, clip_gradient=clip_gradient, out=out,
+                                           name=name, **kwargs)
+
+def mp_adamw_update(weight, grad, mean, var, weight32, rescale_grad, lr, eta, beta1=0.9,
+                    beta2=0.999, epsilon=1e-8, wd=0, clip_gradient=-1, out=None,
+                    name=None, **kwargs):
+    if not isinstance(rescale_grad, ndarray.NDArray):
+        rescale_grad = ndarray.full(shape=(1,), val=rescale_grad, ctx=weight.context)
+    else:
+        rescale_grad = rescale_grad.as_in_context(weight.context)
+    return ndarray._internal._mp_adamw_update(weight=weight, grad=grad, mean=mean, var=var,
+                                              weight32=weight32,
+                                              rescale_grad=rescale_grad, lr=lr, eta=eta,
+                                              beta1=beta1, beta2=beta2, epsilon=epsilon,
+                                              wd=wd, clip_gradient=clip_gradient, out=out,
+                                              name=name, **kwargs)
diff --git a/python/mxnet/ndarray/register.py b/python/mxnet/ndarray/register.py
index 3b19a77..05d7f17 100644
--- a/python/mxnet/ndarray/register.py
+++ b/python/mxnet/ndarray/register.py
 @@ -167,3 +167,14 @@ def _make_ndarray_function(handle, name, func_name):
     return ndarray_function
 
 _init_op_module('mxnet', 'ndarray', _make_ndarray_function)
+
+# Update operator documentation with added float support
+# Note that we can only do this after the op module is initialized
+# Otherwise the backend operators cannot be found
+# pylint: disable=wrong-import-position
+from .contrib import adamw_update, mp_adamw_update
+from ._internal import _adamw_update, _mp_adamw_update
+adamw_update.__doc__ = _adamw_update.__doc__.replace("rescale_grad : NDArray",
+                                                     "rescale_grad : NDArray or float")
+mp_adamw_update.__doc__ = _mp_adamw_update.__doc__.replace("rescale_grad : NDArray",
+                                                           "rescale_grad : NDArray or float")
diff --git a/python/mxnet/symbol/contrib.py b/python/mxnet/symbol/contrib.py
index a83227a..d1048df 100644
--- a/python/mxnet/symbol/contrib.py
+++ b/python/mxnet/symbol/contrib.py
 @@ -727,3 +727,25 @@ def cond(pred, then_func, else_func, name="cond"):
     outputs = [result[i] for i in range(then_num_outputs)]
     outputs, _ = _regroup(outputs, then_fmt)
     return outputs
+
+def adamw_update(weight, grad, mean, var, rescale_grad, lr, eta, beta1=0.9, beta2=0.999,
+                 epsilon=1e-8, wd=0, clip_gradient=-1, out=None, name=None, **kwargs):
+    if not isinstance(rescale_grad, Symbol):
+        rescale_grad = symbol.full(shape=(1,), val=rescale_grad)
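
A quick usage sketch of what this change restores: rescale_grad may now be
either a plain float or an NDArray (tensor values below are made up):

```python
import mxnet as mx

shape = (2, 2)
weight = mx.nd.ones(shape)
grad = mx.nd.ones(shape)
mean = mx.nd.zeros(shape)
var = mx.nd.zeros(shape)

# A scalar rescale_grad is wrapped into a 1-element NDArray internally.
mx.nd.contrib.adamw_update(weight, grad, mean, var,
                           rescale_grad=0.5, lr=0.1, eta=1.0)

# An NDArray rescale_grad is moved to the weight's context if needed.
mx.nd.contrib.adamw_update(weight, grad, mean, var,
                           rescale_grad=mx.nd.array([0.5]), lr=0.1, eta=1.0)
```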

[GitHub] piyushghai commented on issue #14269: Updated docs for R-package installation

2019-02-27 Thread GitBox
piyushghai commented on issue #14269: Updated docs for R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#issuecomment-468057591
 
 
   > Do we still support `cu80` in the 1.4 release?
   
   There were issues when I tried to make the packages for cu80.
   I was following the steps given here:
   http://mxnet.incubator.apache.org/versions/master/install/build_from_source.html
 




[GitHub] reminisce commented on a change in pull request #14270: [MXNET-1330] Bring nnvm::Tuple to mxnet::Tuple

2019-02-27 Thread GitBox
reminisce commented on a change in pull request #14270: [MXNET-1330] Bring 
nnvm::Tuple to mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14270#discussion_r260969038
 
 

 ##
 File path: include/mxnet/tuple.h
 ##
 @@ -0,0 +1,711 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ *  Copyright (c) 2016 by Contributors
+ * \file mxnet/tuple.h
+ * \brief Data structure Tuple and TShape to store dynamic sized shapes.
+ */
+#ifndef MXNET_TUPLE_H_
+#define MXNET_TUPLE_H_
+
+#include <vector>
+#include <type_traits>
+#include <algorithm>
+#include <utility>
+#include <iostream>
+#include <string>
+#include "nnvm/op_attr_types.h"
+#include "nnvm/graph_attr_types.h"
+#include "nnvm/graph.h"
+#include "nnvm/pass.h"
+
+namespace mxnet {
+
+/*!
+ * \brief A dynamic sized array data structure that is optimized for storing
+ *        small number of elements with same type.
+ *
+ *  Data will be stored in stack when number of elements is small.
+ *  It is suitable to hold shape of Tensor.
+ *
+ * \tparam ValueType The type of data stored inside tuple.
+ * \sa TShape
+ */
+template<typename ValueType>
+class Tuple {
+ public:
+  /*! \brief default constructor */
+  Tuple() = default;
+  /*! \brief destructor */
+  inline ~Tuple() {
+    delete [] data_heap_;
+  }
+  /*!
+   * \brief copy constructor from another tuple
+   * \param s the source tuple
+   */
+  inline Tuple(const Tuple<ValueType>& s) {
+    this->assign(s.begin(), s.end());
+  }
+  /*!
+   * \brief constructor from initializer list
+   * \param init the initializer_list
+   */
+  inline Tuple(std::initializer_list<ValueType> init) {
+    this->assign(init.begin(), init.end());
+  }
+  /*!
+   * \brief constructor from vector
+   * \param init the vector
+   */
+  inline Tuple(std::vector<ValueType> init) {  // NOLINT(runtime/explicit)
+    this->assign(init.begin(), init.end());
+  }
+  /*!
+   * \brief move constructor from Tuple
+   * \param src the source shape
+   */
+
+  inline Tuple(Tuple<ValueType>&& src) {   // NOLINT(runtime/explicit)
+    this->swap(src);
+  }
+  /*!
+   * \brief construct the Tuple from content of iterator
+   * \param begin the beginning of iterator
+   * \param end end the end of the iterator
+   * \tparam RandomAccessIterator iterator type
+   */
+  template<typename RandomAccessIterator>
+  inline Tuple(RandomAccessIterator begin,
+               RandomAccessIterator end) {
+    this->assign(begin, end);
+  }
+  /*!
+   * \brief Assign content to tuple from iterator.
+   * \param begin the beginning of iterator
+   * \param end end the end of the iterator
+   * \tparam RandomAccessIterator iterator type
+   */
+  template<typename RandomAccessIterator>
+  inline void assign(RandomAccessIterator begin,
+                     RandomAccessIterator end) {
+    this->SetDim(end - begin);
+    std::copy(begin, end, this->begin());
+  }
+  /*!
+   * \brief Swap current object with other
+   * \param other another object to be swapped.
+   */
+  inline void swap(Tuple<ValueType>& other) {  // NOLINT(*)
+    std::swap(ndim_, other.ndim_);
+    std::swap(num_heap_allocated_, other.num_heap_allocated_);
+    std::swap(data_stack_, other.data_stack_);
+    std::swap(data_heap_, other.data_heap_);
+  }
+  /*!
+   * \brief assignment from another tuple.
+   * \param src source tuple
+   * \return reference of self
+   */
+  inline Tuple<ValueType>& operator=(const Tuple<ValueType>& src) {
+    this->assign(src.begin(), src.end());
+    return *this;
+  }
+  /*!
+   * \brief assignment from rvalue of another tuple.
+   * \param src source tuple
+   * \return reference of self
+   */
+  inline Tuple<ValueType>& operator=(Tuple<ValueType>&& src) {
+    Tuple<ValueType>(std::move(src)).swap(*this);
+    return *this;
+  }
+  /*!
+   * \brief assignment from initializer list
+   * \param init the source initializer list
+   * \return reference of self
+   */
+  inline Tuple<ValueType> &operator=(std::initializer_list<ValueType> init) {
+    this->assign(init.begin(), init.end());
+    return *this;
+  }
+  /*!
+   * \return whether two tuple equals
+   * \param s the tuple to compare against
+   */
+  inline bool operator==(const Tuple<ValueType> &s) const {
+    if (ndim_ != s.ndim_) return false;
+    return std::equal(begin(), end(), s.begin());
+  }
+  /*!
+   * \return whether two tuple not equal
+   * \param s the tuple to compare against
+   */
+  inline bool operator!=(const Tuple<ValueType> &s) const {
+    return !(*this 

[GitHub] reminisce commented on a change in pull request #14270: [MXNET-1330] Bring nnvm::Tuple to mxnet::Tuple

2019-02-27 Thread GitBox
reminisce commented on a change in pull request #14270: [MXNET-1330] Bring 
nnvm::Tuple to mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14270#discussion_r260969465
 
 

 ##
 File path: include/mxnet/tuple.h
 ##
 @@ -0,0 +1,711 @@

[GitHub] zhreshold commented on a change in pull request #14259: Add Gluon Transformer Crop

2019-02-27 Thread GitBox
zhreshold commented on a change in pull request #14259: Add Gluon Transformer 
Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#discussion_r260969948
 
 

 ##
 File path: python/mxnet/gluon/data/vision/transforms.py
 ##
 @@ -228,6 +228,54 @@ def forward(self, x):
 return image.random_size_crop(x, *self._args)[0]
 
 
+class Crop(HybridBlock):
 
 Review comment:
   There is a deprecated mx.sym.Crop op already; using `sym.image.Crop` may introduce some confusion again. Can you propose a new name?
   
   For image transformation, since resize is supported, I guess `CropResize` is better?




[GitHub] zhreshold commented on a change in pull request #14259: Add Gluon Transformer Crop

2019-02-27 Thread GitBox
zhreshold commented on a change in pull request #14259: Add Gluon Transformer 
Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#discussion_r260970190
 
 

 ##
 File path: python/mxnet/gluon/data/vision/transforms.py
 ##
 @@ -228,6 +228,54 @@ def forward(self, x):
 return image.random_size_crop(x, *self._args)[0]
 
 
+class Crop(HybridBlock):
+    """Crop the input image with and optionally resize it.
+    Makes a crop of the original image then optionally resize it to the specified size.
+    Parameters
+    ----------
+    x0 : int
 
 Review comment:
   imo, `x` is better than `x0` since there is no `x1`




[GitHub] zhreshold commented on a change in pull request #14259: Add Gluon Transformer Crop

2019-02-27 Thread GitBox
zhreshold commented on a change in pull request #14259: Add Gluon Transformer 
Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#discussion_r260970232
 
 

 ##
 File path: python/mxnet/gluon/data/vision/transforms.py
 ##
 @@ -228,6 +228,54 @@ def forward(self, x):
 return image.random_size_crop(x, *self._args)[0]
 
 
+class Crop(HybridBlock):
+    """Crop the input image with and optionally resize it.
+    Makes a crop of the original image then optionally resize it to the specified size.
+    Parameters
+    ----------
+    x0 : int
+        Left boundary of the cropping area
+    y0 : int
 
 Review comment:
   same here




[GitHub] zhreshold commented on a change in pull request #14259: Add Gluon Transformer Crop

2019-02-27 Thread GitBox
zhreshold commented on a change in pull request #14259: Add Gluon Transformer 
Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#discussion_r260971167
 
 

 ##
 File path: src/operator/image/crop.cc
 ##
 @@ -0,0 +1,78 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*/
+
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file crop-cc.h
+ * \brief the image crop operator registration
+ */
+
+#include "mxnet/base.h"
+#include "crop-inl.h"
+#include "../operator_common.h"
+#include "../elemwise_op_common.h"
+
+namespace mxnet {
+namespace op {
+namespace image {
+
+DMLC_REGISTER_PARAMETER(CropParam);
+
+NNVM_REGISTER_OP(_image_crop)
+.describe(R"code(Crop an image NDArray of shape (H x W x C) or (N x H x W x C) 
+to the given size.
+Example:
+.. code-block:: python
+image = mx.nd.random.uniform(0, 255, (4, 2, 3)).astype(dtype=np.uint8)
+mx.nd.image.crop(image, 1, 1, 2, 2)
+[[[144  34   4]
+  [ 82 157  38]]
+
+ [[156 111 230]
+  [177  25  15]]]
+
+image = mx.nd.random.uniform(0, 255, (2, 4, 2, 
3)).astype(dtype=np.uint8)
+mx.nd.image.crop(image, 1, 1, 2, 2)
+ 35 198  50]
+   [242  94 168]]
+
+  [[223 119 129]
+   [249  14 154]]]
+
+
+  [[[137 215 106]
+[ 79 174 133]]
+
+   [[116 142 109]
+[ 35 239  50
+
+)code" ADD_FILELINE)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser)
+.set_attr("FInferShape", CropShape)
+.set_attr("FInferType", ElemwiseType<1, 1>)
+.set_attr("FCompute", Crop)
+.set_attr("FGradient", ElemwiseGradUseNone{ "_copy" })
 
 Review comment:
   Copy gradient does not apply to the crop op.
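
For context, a rough usage sketch of the operator under review, mirroring the docstring example in the diff above; since #14259 is still in review, the operator and argument names may change:

```python
# Hypothetical usage of the proposed image crop operator (PR under review;
# names and signature may change before merge).
import mxnet as mx
import numpy as np

image = mx.nd.random.uniform(0, 255, (4, 2, 3)).astype(np.uint8)
patch = mx.nd.image.crop(image, 1, 1, 2, 2)  # x, y, width, height
print(patch.shape)  # expect (2, 2, 3) for an H x W x C input
```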




[GitHub] szha commented on issue #14247: corrected a spelling

2019-02-27 Thread GitBox
szha commented on issue #14247: corrected a spelling
URL: https://github.com/apache/incubator-mxnet/pull/14247#issuecomment-468050552
 
 
   @pldeepesh would you mind doing a rebase?
   ```bash
   git remote add upstream https://github.com/apache/incubator-mxnet
   git pull upstream master --rebase
   # resolve conflicts if any
   git push --force
   ```




[GitHub] junrushao1994 edited a comment on issue #14217: [DO NOT REVIEW] Bring nnvm::Tuple to mxnet::Tuple

2019-02-27 Thread GitBox
junrushao1994 edited a comment on issue #14217: [DO NOT REVIEW] Bring 
nnvm::Tuple to mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14217#issuecomment-468044218
 
 
   Will send this to the master branch, so closing it for now.
   
   #14270




[GitHub] reminisce commented on issue #14270: [MXNET-1330] Bring nnvm::Tuple to mxnet::Tuple

2019-02-27 Thread GitBox
reminisce commented on issue #14270: [MXNET-1330] Bring nnvm::Tuple to 
mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14270#issuecomment-468048808
 
 
   Thanks for being willing to make so many micro-surgical changes scattered all over the place.




[GitHub] junrushao1994 removed a comment on issue #14266: Move TShape definition and necessary passes out of NNVM

2019-02-27 Thread GitBox
junrushao1994 removed a comment on issue #14266: Move TShape definition and 
necessary passes out of NNVM
URL: 
https://github.com/apache/incubator-mxnet/issues/14266#issuecomment-468040539
 
 
   Working on this #14217 




[GitHub] junrushao1994 commented on issue #14266: Move TShape definition and necessary passes out of NNVM

2019-02-27 Thread GitBox
junrushao1994 commented on issue #14266: Move TShape definition and necessary 
passes out of NNVM
URL: 
https://github.com/apache/incubator-mxnet/issues/14266#issuecomment-468047318
 
 
   Working on this #14270 




[GitHub] hetong007 commented on issue #14269: Updated docs for R-package installation

2019-02-27 Thread GitBox
hetong007 commented on issue #14269: Updated docs for R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#issuecomment-468046563
 
 
   Do we still support `cu80` in the 1.4 release?




[GitHub] azai91 commented on issue #14259: Add Gluon Transformer Crop

2019-02-27 Thread GitBox
azai91 commented on issue #14259: Add Gluon Transformer Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#issuecomment-468045975
 
 
   got it, thanks




[GitHub] piyushghai commented on a change in pull request #14269: Updated docs for R-package installation

2019-02-27 Thread GitBox
piyushghai commented on a change in pull request #14269: Updated docs for 
R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#discussion_r260961050
 
 

 ##
 File path: docs/install/index.md
 ##
 @@ -1090,7 +1090,8 @@ You can [build MXNet-R from 
source](windows_setup.html#install-mxnet-package-for
   options(repos = cran)
   install.packages("mxnet")
 ```
-Change cu92 to cu80, cu90 or cu91 based on your CUDA toolkit version. 
Currently, MXNet supports these versions of CUDA.
+Change cu92 to cu80, cu90, cu91 or cuda 100 based on your CUDA toolkit 
version. Currently, MXNet supports these versions of CUDA.
 
 Review comment:
   Aah. My bad. 
   Done. 




[GitHub] hetong007 commented on a change in pull request #14269: Updated docs for R-package installation

2019-02-27 Thread GitBox
hetong007 commented on a change in pull request #14269: Updated docs for 
R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#discussion_r260960591
 
 

 ##
 File path: docs/install/index.md
 ##
 @@ -1090,7 +1090,8 @@ You can [build MXNet-R from 
source](windows_setup.html#install-mxnet-package-for
   options(repos = cran)
   install.packages("mxnet")
 ```
-Change cu92 to cu80, cu90 or cu91 based on your CUDA toolkit version. 
Currently, MXNet supports these versions of CUDA.
+Change cu92 to cu80, cu90, cu91 or cuda 100 based on your CUDA toolkit 
version. Currently, MXNet supports these versions of CUDA.
 
 Review comment:
   Should be `cu100` without the space. It will be part of a path.




[GitHub] stu1130 edited a comment on issue #14259: Add Gluon Transformer Crop

2019-02-27 Thread GitBox
stu1130 edited a comment on issue #14259: Add Gluon Transformer Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#issuecomment-468044964
 
 
   @azai91 it throws an MXNet exception:
   
https://github.com/apache/incubator-mxnet/blob/54e746b12018417df2c37e32a73bb0dee87df492/src/operator/image/crop-inl.h#L97-L100



