[GitHub] anirudhacharya commented on issue #12389: [Bug] Not able to detect out of bound index on an ndarray. potential memory overflow.

2018-08-29 Thread GitBox
anirudhacharya commented on issue #12389: [Bug] Not able to detect out of bound 
index on an ndarray. potential memory overflow.
URL: 
https://github.com/apache/incubator-mxnet/issues/12389#issuecomment-416846361
 
 
   @mxnet-label-bot [NDArray, Memory, Bug]




[GitHub] meanmee commented on issue #12363: distributed training notebook tests

2018-08-29 Thread GitBox
meanmee commented on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416847397
 
 
   first of all, check out: 
https://github.com/apache/incubator-mxnet/tree/master/example/distributed_training
   install anaconda on all your machines
   run "pip install mxnet-cu80==1.2.1" on all your machines (or pip install 
mxnet-cu90, depending on your machines' environment)
   make all your machines sshable without a password between each other:
   machine A to machine B
   cd ~/.ssh
   ssh-keygen -t rsa
   two files are generated: id_rsa is the private key and id_rsa.pub is the 
public key
   on machine B: vim ~/.ssh/authorized_keys and copy the contents of machine 
A's ~/.ssh/id_rsa.pub into it
   do the same to make machine B to machine A sshable without a password
   machine A to machine A and machine B to machine B are also needed
   then run 
python 
/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/mxnet/tools/launch.py -n 
2 -s 2 -H hosts --sync-dst-dir /home/xiaomin.wu/cifar10_dist --launcher ssh 
"/home/xiaomin.wu/anaconda2/bin/python cifar10_dist.py"
   here we use /home/xiaomin.wu/anaconda2/bin/python instead of python, because 
if we just use python here the machines may pick up /usr/bin/python, which will 
drive you crazy.
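   For context, each worker that launch.py starts typically creates a 
distributed KVStore; a minimal sketch of that pattern (illustrative, not the 
actual cifar10_dist.py code):
   ```python
   import mxnet as mx

   # launch.py starts this script once per worker (-n) and also starts the
   # parameter servers (-s); the distributed KVStore wires them together
   kv = mx.kv.create('dist_sync')  # synchronous distributed training
   print('worker %d of %d' % (kv.rank, kv.num_workers))
   # a Module or gluon.Trainer is then constructed with kvstore=kv so that
   # gradients are aggregated across workers on the parameter servers
   ```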




[GitHub] meanmee edited a comment on issue #12363: distributed training notebook tests

2018-08-29 Thread GitBox
meanmee edited a comment on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416847397
 
 
   hi, thank you guys, I solved the problem. Here is my solution:
   https://shimo.im/docs/JwobIyIK8ucMgc3r/ 




[GitHub] TccccD commented on issue #12287: Add stable nrm2 for L2_normalization

2018-08-29 Thread GitBox
TccccD commented on issue #12287: Add stable nrm2 for L2_normalization
URL: https://github.com/apache/incubator-mxnet/pull/12287#issuecomment-416850121
 
 
   I ran through all the use cases in the test, but none of them gave an error. 
Could this be a runtime environment issue? What environment should I use to 
reproduce this bug?
   @haojin2 @piiswrong @leezu @anirudh2290 @szha




[GitHub] VanDavv commented on issue #3073: is BlockGrad symbol equal to setting propagate_down as false in caffe?

2018-08-29 Thread GitBox
VanDavv commented on issue #3073: is BlockGrad symbol equal to setting 
propagate_down as false in caffe?
URL: 
https://github.com/apache/incubator-mxnet/issues/3073#issuecomment-416852189
 
 
   @winstywang So, if I have a pretrained model and I want to freeze the first 
90% of the layers, while on the others I'd like to use gradients with weight 
decay, how can I do that in MXNet?




[GitHub] VanDavv opened a new issue #12392: How to use BlockGrad with weight decay

2018-08-29 Thread GitBox
VanDavv opened a new issue #12392: How to use BlockGrad with weight decay
URL: https://github.com/apache/incubator-mxnet/issues/12392
 
 
   I saw in 
[this](https://github.com/apache/incubator-mxnet/issues/1340#issuecomment-174166248)
 and 
[this](https://github.com/apache/incubator-mxnet/issues/3073#issuecomment-241033513)
 issue's comments that @winstywang suggests that using BlockGrad with weight 
decay is highly inadvisable. As I also asked in 
[this](https://github.com/apache/incubator-mxnet/issues/3073#issuecomment-416852189)
 comment, how can I use BlockGrad to freeze the first layers of the net, while 
still using weight decay on the unfrozen ones?
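   One common way to achieve this without BlockGrad (a Gluon sketch; the 
pretrained model and the `grad_req='null'` freezing are illustrative, not 
necessarily what @winstywang had in mind):
   ```python
   import mxnet as mx
   from mxnet.gluon.model_zoo import vision

   net = vision.resnet18_v1(pretrained=True)
   # freeze the feature extractor: no gradients are computed or applied
   net.features.collect_params().setattr('grad_req', 'null')
   # the trainer only sees the unfrozen head, so weight decay ('wd') is
   # applied to those parameters alone
   trainer = mx.gluon.Trainer(net.output.collect_params(), 'sgd',
                              {'learning_rate': 0.01, 'wd': 1e-4})
   ```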




[GitHub] TccccD edited a comment on issue #12287: Add stable nrm2 for L2_normalization

2018-08-29 Thread GitBox
TccccD edited a comment on issue #12287: Add stable nrm2 for L2_normalization
URL: https://github.com/apache/incubator-mxnet/pull/12287#issuecomment-416850121
 
 
   I ran through all the use cases in the test, but none of them gave an error:
   ```python
   import random

   from common import with_seed  # MXNet unit-test helper (tests/python/unittest)
   # check_l2_normalization is the checker defined alongside these tests in
   # tests/python/unittest/test_operator.py

   @with_seed()
   def test_l2_normalization():
       for dtype in ['float16', 'float32', 'float64']:
           for mode in ['channel', 'spatial', 'instance']:
               nbatch = random.randint(1, 4)
               nchannel = random.randint(3, 5)
               height = random.randint(4, 6)
               check_l2_normalization((nbatch, nchannel, height), mode, dtype)
               width = random.randint(5, 7)
               check_l2_normalization((nbatch, nchannel, height, width), mode, dtype)

   # test_l2_normalization()


   # @with_seed()
   def test_l2_normalization2():
       for dtype in ['float16', 'float32', 'float64']:
           for mode in ['channel', 'spatial', 'instance']:
               for nbatch in range(1, 5):
                   for nchannel in range(3, 6):
                       for height in range(4, 7):
                           check_l2_normalization((nbatch, nchannel, height), mode, dtype)
                           print((nbatch, nchannel, height), '...ok')
                           for width in range(5, 8):
                               check_l2_normalization((nbatch, nchannel, height, width), mode, dtype)

   test_l2_normalization2()
   ```
   
   Could this be a runtime environment issue? What environment should I use to 
reproduce this bug?
   @haojin2 @piiswrong @leezu @anirudh2290 @szha




[GitHub] TccccD edited a comment on issue #12287: Add stable nrm2 for L2_normalization

2018-08-29 Thread GitBox
TccccD edited a comment on issue #12287: Add stable nrm2 for L2_normalization
URL: https://github.com/apache/incubator-mxnet/pull/12287#issuecomment-416850121
 
 
   I ran through all the use cases in the test, but none of them gave an error:
   ```python
   import random

   from common import with_seed  # MXNet unit-test helper (tests/python/unittest)
   # check_l2_normalization is the checker defined alongside these tests in
   # tests/python/unittest/test_operator.py

   @with_seed()
   def test_l2_normalization():
       for dtype in ['float16', 'float32', 'float64']:
           for mode in ['channel', 'spatial', 'instance']:
               nbatch = random.randint(1, 4)
               nchannel = random.randint(3, 5)
               height = random.randint(4, 6)
               check_l2_normalization((nbatch, nchannel, height), mode, dtype)
               width = random.randint(5, 7)
               check_l2_normalization((nbatch, nchannel, height, width), mode, dtype)

   # test_l2_normalization()


   # @with_seed()
   def test_l2_normalization2():
       for dtype in ['float16', 'float32', 'float64']:
           for mode in ['channel', 'spatial', 'instance']:
               for nbatch in range(1, 5):
                   for nchannel in range(3, 6):
                       for height in range(4, 7):
                           check_l2_normalization((nbatch, nchannel, height), mode, dtype)
                           print((nbatch, nchannel, height), '...ok')
                           for width in range(5, 8):
                               check_l2_normalization((nbatch, nchannel, height, width), mode, dtype)

   test_l2_normalization2()
   ```
   
   The error occurs in `asnumpy` every time.
   Could this be a runtime environment issue? What environment should I use to 
reproduce this bug?
   @haojin2 @piiswrong @leezu @anirudh2290 @szha




[GitHub] TccccD edited a comment on issue #12287: Add stable nrm2 for L2_normalization

2018-08-29 Thread GitBox
TccccD edited a comment on issue #12287: Add stable nrm2 for L2_normalization
URL: https://github.com/apache/incubator-mxnet/pull/12287#issuecomment-416850121
 
 
   I ran through all the use cases in the test, but none of them gave an error:
   ```python
   import random

   from common import with_seed  # MXNet unit-test helper (tests/python/unittest)
   # check_l2_normalization is the checker defined alongside these tests in
   # tests/python/unittest/test_operator.py

   @with_seed()
   def test_l2_normalization():
       for dtype in ['float16', 'float32', 'float64']:
           for mode in ['channel', 'spatial', 'instance']:
               nbatch = random.randint(1, 4)
               nchannel = random.randint(3, 5)
               height = random.randint(4, 6)
               check_l2_normalization((nbatch, nchannel, height), mode, dtype)
               width = random.randint(5, 7)
               check_l2_normalization((nbatch, nchannel, height, width), mode, dtype)

   # test_l2_normalization()


   # @with_seed()
   def test_l2_normalization2():
       for dtype in ['float16', 'float32', 'float64']:
           for mode in ['channel', 'spatial', 'instance']:
               for nbatch in range(1, 5):
                   for nchannel in range(3, 6):
                       for height in range(4, 7):
                           check_l2_normalization((nbatch, nchannel, height), mode, dtype)
                           print((nbatch, nchannel, height), '...ok')
                           for width in range(5, 8):
                               check_l2_normalization((nbatch, nchannel, height, width), mode, dtype)

   test_l2_normalization2()
   ```
   
   The error occurs in `asnumpy` every time; see 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-12287/6/pipeline
   Could this be a runtime environment issue? What environment should I use to 
reproduce this bug?
   @haojin2 @piiswrong @leezu @anirudh2290 @szha




[GitHub] luobao-intel commented on issue #12377: Flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
luobao-intel commented on issue #12377: Flaky test: test_mkldnn.test_activation
URL: 
https://github.com/apache/incubator-mxnet/issues/12377#issuecomment-416861592
 
 
   This test validates the activation calculation in mkldnn by comparing the 
gradient against theano.gradient.numeric_grad. However, the gradient 
calculation based on theano is not correct when the input is close to zero. 
Thus, flaky errors occur when the random input vector contains extremely small 
positive numbers.
   The experiments are as follows.
   
   ## experiment 1:
   
   input data: [[1, 2], [3, 0.0001]]
   
   location:
   {'data': [[1, 2], [3, 0.0001]],
    '__random_proj': [[0.3546685  0.8954062 ]
                      [0.40476447 0.7724642 ]]}
   
   theano gradient:
   [[0.35466552 0.8954048 ]
    [0.40476322 0.39395675]]
   
   mkldnn:
   [[0.3546685  0.8954062 ]
    [0.40476447 0.7724642 ]]
   
   ## experiment 2:
   input data: [[1, -2], [-4, 0.0005]]
   
   location:
   {'data': [[1, -2], [-4, 0.0005]],
    '__random_proj': [[0.3546685  0.8954062 ]
                      [0.40476447 0.7724642 ]]}
   
   theano gradient:
   [[0.35466552 0.        ]
    [0.         0.4248553 ]]
   
   mkldnn (second argument):
   [[0.3546685  0.        ]
    [0.         0.7724642 ]]
   
   ## analysis
   The derivative of the ReLU function is 0 for x < 0 and 1 for x > 0.
   
   Therefore, in the check_numeric_gradient function, the executor's gradient 
should, element-wise, equal the location value wherever the corresponding 
input element is positive, and be 0 otherwise.
   The theano-based gradient is clearly wrong when the corresponding input 
element is close to zero.
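   A quick way to reproduce the effect outside the test (a standalone NumPy 
sketch; the eps value is chosen for illustration):
   ```python
   import numpy as np

   def relu(x):
       return np.maximum(x, 0.0)

   def central_diff(f, x, eps):
       # finite-difference approximation, as a numeric gradient checker uses
       return (f(x + eps) - f(x - eps)) / (2 * eps)

   x = np.float32(1e-4)                            # tiny positive input, as in experiment 1
   print(central_diff(relu, x, np.float32(1e-2)))  # ~0.505: eps straddles zero, wrong
   print(1.0 if x > 0 else 0.0)                    # true ReLU derivative at x: 1.0
   ```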
   




[GitHub] luobao-intel edited a comment on issue #12377: Flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
luobao-intel edited a comment on issue #12377: Flaky test: 
test_mkldnn.test_activation
URL: 
https://github.com/apache/incubator-mxnet/issues/12377#issuecomment-416861592
 
 
   This test validates the activation calculation in mkldnn by comparing the 
gradient against theano.gradient.numeric_grad. However, the gradient 
calculation based on theano is not correct when the input is close to zero. 
Thus, flaky errors occur when the random input vector contains extremely small 
positive numbers.
   The experiments are as follows.
   
   ## experiment 1:
   
   input data: [[1, 2], [3, 0.0001]]
   
   location:
   {'data': [[1, 2], [3, 0.0001]],
    '__random_proj': [[0.3546685  0.8954062 ]
                      [0.40476447 0.7724642 ]]}
   
   theano gradient:
   [[0.35466552 0.8954048 ]
    [0.40476322 0.39395675]]
   
   mkldnn:
   [[0.3546685  0.8954062 ]
    [0.40476447 0.7724642 ]]
   
   ## experiment 2:
   input data: [[1, -2], [-4, 0.0005]]
   
   location:
   {'data': [[1, -2], [-4, 0.0005]],
    '__random_proj': [[0.3546685  0.8954062 ]
                      [0.40476447 0.7724642 ]]}
   
   theano gradient:
   [[0.35466552 0.        ]
    [0.         0.4248553 ]]
   
   mkldnn:
   [[0.3546685  0.        ]
    [0.         0.7724642 ]]
   
   ## analysis
   The derivative of the ReLU function is 0 for x < 0 and 1 for x > 0.
   
   Therefore, in the check_numeric_gradient function, the executor's gradient 
should, element-wise, equal the location value wherever the corresponding 
input element is positive, and be 0 otherwise.
   The theano-based gradient is clearly wrong when the corresponding input 
element is close to zero.
   




[GitHub] pengzhao-intel commented on issue #12378: Disabled flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
pengzhao-intel commented on issue #12378: Disabled flaky test: 
test_mkldnn.test_activation
URL: https://github.com/apache/incubator-mxnet/pull/12378#issuecomment-416874153
 
 
   Please refer to the RCA in #12377 and wait a moment for the merge. 
   We need a better solution :)
   




[GitHub] dzabraev opened a new issue #12393: Deadlock in save_checkpoint when using threading

2018-08-29 Thread GitBox
dzabraev opened a new issue #12393: Deadlock in save_checkpoint when using 
threading
URL: https://github.com/apache/incubator-mxnet/issues/12393
 
 
   ## Description
   To prepare batches I create several (100) Python threads (I call these 
preparing-threads). Each of these threads prepares a batch and appends it to a 
queue. The main process then takes a prepared batch and does the learning. 
Each preparing-thread calls mx.nd.array, and this causes a deadlock when the 
main thread calls `mx.model.save_checkpoint`. If I call mx.nd.array in the 
main thread instead, it works fine with no deadlock, but the learning speed 
(samples/sec) is too low.
   
   Can anybody tell me whether this behaviour is a bug, or whether mx.nd.array 
is simply not meant to be used in a non-main Python thread? I have read a lot 
of the MXNet documentation and couldn't find any restrictions on using 
`threading`.
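   A minimal sketch of the setup described above (names and shapes are 
illustrative, not the actual code):
   ```python
   import threading
   try:
       import queue           # Python 3
   except ImportError:
       import Queue as queue  # Python 2, matching the environment below

   import numpy as np
   import mxnet as mx

   batch_queue = queue.Queue(maxsize=16)

   def preparing_thread():
       while True:
           data = np.random.uniform(size=(32, 3, 32, 32))
           batch_queue.put(mx.nd.array(data))  # mx.nd.array off the main thread

   for _ in range(100):
       t = threading.Thread(target=preparing_thread)
       t.daemon = True
       t.start()

   # main thread: take batches, train, and periodically call
   # mx.model.save_checkpoint(prefix, epoch, sym, arg_params, aux_params),
   # which is where the deadlock is reported to occur
   batch = batch_queue.get()
   ```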
   
   
   ## Environment info
   I'm seeing the deadlock in the Python interface with MXNet 1.2.0 and 1.2.1
   
   ```
   python diagnose.py
   --Python Info--
   ('Version  :', '2.7.12')
   ('Compiler :', 'GCC 5.4.0 20160609')
   ('Build:', ('default', 'Dec  4 2017 14:50:18'))
   ('Arch :', ('64bit', 'ELF'))
   Pip Info---
   ('Version  :', '10.0.1')
   ('Directory:', '/usr/local/lib/python2.7/dist-packages/pip')
   --MXNet Info---
   /usr/local/lib/python2.7/dist-packages/h5py/__init__.py:36: FutureWarning: 
Conversion of the second argument of issubdtype from `float` to `np.floating` 
is deprecated. In future, it will be treated as `np.float64 == 
np.dtype(float).type`.
 from ._conv import register_converters as _register_converters
   ('Version  :', '1.2.0')
   ('Directory:', '/usr/local/lib/python2.7/dist-packages/mxnet')
   ('Commit Hash   :', '297c64fd2ee404612aa3ecc880b940fb2538039c')
   --System Info--
   ('Platform :', 'Linux-4.4.0-87-generic-x86_64-with-Ubuntu-16.04-xenial')
   ('system   :', 'Linux')
   ('node :', '894febf28f08')
   ('release  :', '4.4.0-87-generic')
   ('version  :', '#110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017')
   --Hardware Info--
   ('machine  :', 'x86_64')
   ('processor:', 'x86_64')
   Architecture:  x86_64
   CPU op-mode(s):32-bit, 64-bit
   Byte Order:Little Endian
   CPU(s):88
   On-line CPU(s) list:   0-87
   Thread(s) per core:2
   Core(s) per socket:22
   Socket(s): 2
   NUMA node(s):  2
   Vendor ID: GenuineIntel
   CPU family:6
   Model: 79
   Model name:Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
   Stepping:  1
   CPU MHz:   2199.914
   CPU max MHz:   3600.
   CPU min MHz:   1200.
   BogoMIPS:  4403.10
   Virtualization:VT-x
   Hypervisor vendor: vertical
   Virtualization type:   full
   L1d cache: 32K
   L1i cache: 32K
   L2 cache:  256K
   L3 cache:  56320K
   NUMA node0 CPU(s): 0-21,44-65
   NUMA node1 CPU(s): 22-43,66-87
   Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology 
nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 
ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt 
tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb 
intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle 
avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc 
cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm arat pln pts
   --Network Test--
   Setting timeout: 10
   Error open MXNet: https://github.com/apache/incubator-mxnet, , DNS 
finished in 0.0553529262543 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0011 sec, LOAD: 
1.7442 sec.
   Error open FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 , DNS finished in 0.284875869751 sec.
   Error open Conda: https://repo.continuum.io/pkgs/free/, , DNS 
finished in 0.0283861160278 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.9700 sec, LOAD: 
0.7090 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 1.4856 sec, LOAD: 
1.4765 sec.
   ```
   
   ## GDB backtrace
   
   ```
   #0  pthread_cond_wait@@GLIBC_2.3.2 () at 
../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
   #1  0x7f4ca484791c in 
std::condition_variable::wait(std::unique_lock&) () from 
/usr/lib/x86_64-linux-gnu/libstdc++.so.6
   #2  0x7f4b851826fe in 
std::condition_variable::wait
 >(std::unique_lock &, mxnet::engine::ThreadedEngine::) 
(this=0x21d3468,
   __lock=..., __p=...) at /usr/include

[GitHub] lebeg commented on issue #12378: Disabled flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
lebeg commented on issue #12378: Disabled flaky test: 
test_mkldnn.test_activation
URL: https://github.com/apache/incubator-mxnet/pull/12378#issuecomment-416875237
 
 
   @pengzhao-intel Sure :) But it's better to disable the flaky test first, so 
that PRs from others are not blocked, and then add it back with a proper fix.




[GitHub] lebeg commented on issue #12378: Disabled flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
lebeg commented on issue #12378: Disabled flaky test: 
test_mkldnn.test_activation
URL: https://github.com/apache/incubator-mxnet/pull/12378#issuecomment-416876126
 
 
   We have continuous failures on the CI for that:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/1534/pipeline
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/1533/pipeline/
   
   We need an urgent merge for this.




[GitHub] pengzhao-intel commented on issue #12377: Flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
pengzhao-intel commented on issue #12377: Flaky test: 
test_mkldnn.test_activation
URL: 
https://github.com/apache/incubator-mxnet/issues/12377#issuecomment-416876960
 
 
   The reference checker applies the finite difference method, but the eps is 
too large for the float datatype here. 
   In @luobao-intel's case, the input data is on the order of 1e-5, so the 
finite difference can't be calculated correctly.
   I suggest changing eps to 1e-6. @luobao-intel will file the PR soon.
   
   
https://github.com/apache/incubator-mxnet/blob/e2a3eef349cb6643c08a7840d8cbd43b38fedfd5/python/mxnet/test_utils.py#L716
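   Numerically, the effect looks like this (a standalone NumPy sketch, with 
eps values chosen for illustration rather than taken from the test):
   ```python
   import numpy as np

   def relu(x):
       return np.maximum(x, 0.0)

   def central_diff(f, x, eps):
       # the finite-difference scheme a numeric gradient checker uses
       return (f(x + eps) - f(x - eps)) / (2 * eps)

   x = np.float32(1e-5)
   print(central_diff(relu, x, np.float32(1e-3)))  # ~0.5: eps straddles zero, wrong
   print(central_diff(relu, x, np.float32(1e-6)))  # ~1.0: matches the true gradient
   ```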
   
   




[GitHub] marcoabreu commented on issue #12391: [MXNET-851] Test coverage metrics for R-package

2018-08-29 Thread GitBox
marcoabreu commented on issue #12391: [MXNET-851] Test coverage metrics for 
R-package
URL: https://github.com/apache/incubator-mxnet/pull/12391#issuecomment-416877104
 
 
   Thank you! Could you elaborate on how and when this script gets executed? 
I guess we have to modify our test pipeline and add it there.




[GitHub] pengzhao-intel edited a comment on issue #12377: Flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
pengzhao-intel edited a comment on issue #12377: Flaky test: 
test_mkldnn.test_activation
URL: 
https://github.com/apache/incubator-mxnet/issues/12377#issuecomment-416876960
 
 
   The reference checker applies the finite difference method, but the eps is 
too large for the float datatype here. 
   In @luobao-intel's case, the input data is on the order of 1e-5, so the 
finite difference can't be calculated correctly.
   I suggest changing eps to `1e-6`. @luobao-intel will file the PR soon.
   
   
https://github.com/apache/incubator-mxnet/blob/e2a3eef349cb6643c08a7840d8cbd43b38fedfd5/python/mxnet/test_utils.py#L716
   
   




[GitHub] pengzhao-intel edited a comment on issue #12377: Flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
pengzhao-intel edited a comment on issue #12377: Flaky test: 
test_mkldnn.test_activation
URL: 
https://github.com/apache/incubator-mxnet/issues/12377#issuecomment-416876960
 
 
   The reference checker applies the finite difference method, but the eps is 
too large for the float datatype here. 
   In @luobao-intel's case, the input data is about `xe-5`, so the finite 
difference can't be calculated correctly.
   I suggest changing eps to `1e-6`. @luobao-intel will file the PR soon.
   
   
https://github.com/apache/incubator-mxnet/blob/e2a3eef349cb6643c08a7840d8cbd43b38fedfd5/python/mxnet/test_utils.py#L716
   
   




[GitHub] pengzhao-intel commented on issue #12378: Disabled flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
pengzhao-intel commented on issue #12378: Disabled flaky test: 
test_mkldnn.test_activation
URL: https://github.com/apache/incubator-mxnet/pull/12378#issuecomment-416877742
 
 
   @lebeg please see my comments on #12377.
   Could you try changing the eps value and running these cases again?
   




[GitHub] luobao-intel edited a comment on issue #12377: Flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
luobao-intel edited a comment on issue #12377: Flaky test: 
test_mkldnn.test_activation
URL: 
https://github.com/apache/incubator-mxnet/issues/12377#issuecomment-416861592
 
 
   This test validates the activation calculation in mkldnn by comparing the 
gradient against theano.gradient.numeric_grad. However, the gradient 
calculation referring to theano is not correct when the input is close to 
zero. Thus, flaky errors occur when the random input vector contains extremely 
small positive numbers.
   The experiments are as follows.
   
   ## experiment 1:
   
   input data: [[1, 2], [3, 0.0001]]
   
   location:
   {'data': [[1, 2], [3, 0.0001]],
    '__random_proj': [[0.3546685  0.8954062 ]
                      [0.40476447 0.7724642 ]]}
   
   gradient calculation referring to theano:
   [[0.35466552 0.8954048 ]
    [0.40476322 0.39395675]]
   
   mkldnn:
   [[0.3546685  0.8954062 ]
    [0.40476447 0.7724642 ]]
   
   ## experiment 2:
   input data: [[1, -2], [-4, 0.0005]]
   
   location:
   {'data': [[1, -2], [-4, 0.0005]],
    '__random_proj': [[0.3546685  0.8954062 ]
                      [0.40476447 0.7724642 ]]}
   
   gradient calculation referring to theano:
   [[0.35466552 0.        ]
    [0.         0.4248553 ]]
   
   mkldnn:
   [[0.3546685  0.        ]
    [0.         0.7724642 ]]
   
   ## analysis
   The derivative of the ReLU function is 0 for x < 0 and 1 for x > 0.
   
   Therefore, in the check_numeric_gradient function, the executor's gradient 
should, element-wise, equal the location value wherever the corresponding 
input element is positive, and be 0 otherwise.
   The theano-based gradient is clearly wrong when the corresponding input 
element is close to zero.
   




[GitHub] lebeg commented on issue #12102: site-wide social include

2018-08-29 Thread GitBox
lebeg commented on issue #12102: site-wide social include
URL: https://github.com/apache/incubator-mxnet/pull/12102#issuecomment-416878150
 
 
   ![screen shot 2018-08-29 at 10 55 
43](https://user-images.githubusercontent.com/1753787/44777152-1e72cd00-ab7a-11e8-9c89-13023d0f37fc.png)
Looks good!




[GitHub] luobao-intel edited a comment on issue #12377: Flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
luobao-intel edited a comment on issue #12377: Flaky test: 
test_mkldnn.test_activation
URL: 
https://github.com/apache/incubator-mxnet/issues/12377#issuecomment-416861592
 
 
   This test validates the activation calculation in mkldnn by comparing the 
gradient against theano.gradient.numeric_grad. However, the gradient 
calculation referring to theano is not correct when the input is close to 
zero. Thus, flaky errors occur when the random input vector contains extremely 
small positive numbers.
   The experiments are as follows.
   
   ## experiment 1:
   
   input data: [[1, 2], [3, 0.0001]]
   
   location:
   {'data': [[1, 2], [3, 0.0001]],
    '__random_proj': [[0.3546685  0.8954062 ]
                      [0.40476447 0.7724642 ]]}
   
   gradient calculation referring to theano:
   [[0.35466552 0.8954048 ]
    [0.40476322 0.39395675]]
   
   mkldnn:
   [[0.3546685  0.8954062 ]
    [0.40476447 0.7724642 ]]
   
   ## experiment 2:
   input data: [[1, -2], [-4, 0.0005]]
   
   location:
   {'data': [[1, -2], [-4, 0.0005]],
    '__random_proj': [[0.3546685  0.8954062 ]
                      [0.40476447 0.7724642 ]]}
   
   gradient calculation referring to theano:
   [[0.35466552 0.        ]
    [0.         0.4248553 ]]
   
   mkldnn:
   [[0.3546685  0.        ]
    [0.         0.7724642 ]]
   
   ## analysis
   The derivative of the ReLU function is 0 for x < 0 and 1 for x > 0.
   
   Therefore, in the check_numeric_gradient function, the executor's gradient 
should, element-wise, equal the location value wherever the corresponding 
input element is positive, and be 0 otherwise.
   The theano-based gradient is clearly wrong when the corresponding input 
element is close to zero.
   




[GitHub] marcoabreu closed issue #12364: Importing PyTorch when using ONNX causes a segmentation fault

2018-08-29 Thread GitBox
marcoabreu closed issue #12364: Importing PyTorch when using ONNX causes a 
segmentation fault
URL: https://github.com/apache/incubator-mxnet/issues/12364
 
 
   




[GitHub] marcoabreu commented on issue #12391: [MXNET-851] Test coverage metrics for R-package

2018-08-29 Thread GitBox
marcoabreu commented on issue #12391: [MXNET-851] Test coverage metrics for 
R-package
URL: https://github.com/apache/incubator-mxnet/pull/12391#issuecomment-416883777
 
 
   https://github.com/apache/incubator-mxnet/blob/master/Makefile#L591




[GitHub] marcoabreu commented on issue #12391: [MXNET-851] Test coverage metrics for R-package

2018-08-29 Thread GitBox
marcoabreu commented on issue #12391: [MXNET-851] Test coverage metrics for 
R-package
URL: https://github.com/apache/incubator-mxnet/pull/12391#issuecomment-416884031
 
 
   
https://github.com/apache/incubator-mxnet/blob/master/ci/docker/runtime_functions.sh#L738




[GitHub] taliesinb opened a new issue #12394: C API documentation often doesn't mention returned pointer lifetimes

2018-08-29 Thread GitBox
taliesinb opened a new issue #12394: C API documentation often doesn't mention 
returned pointer lifetimes
URL: https://github.com/apache/incubator-mxnet/issues/12394
 
 
   Take a function from the C API like `MXNDArrayGetShape`, which sets a 
pointer to shape data of an `NDArray`. 
   
   It is not documented at 
https://mxnet.incubator.apache.org/doxygen/c__api_8h.html#a2035651f4392d249d1b904d5eb0c3406
 how long this data lasts, where and how it was allocated, and whether the 
caller is responsible for freeing it.
   
   By chasing things down to `MXAPIThreadLocalEntry` I see that this shape 
buffer is thread-local and will last until the next call to either 
`MXNDArrayGetShape` or `MXSymbolInferShape`. That's an important fact to 
document to be able to use the API correctly! If this is documented already 
somewhere, that's good, but then a reference to this section should be included 
in a doxygen `warn` field of `MXNDArrayGetShape` etc. 
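   For illustration, here is the lifetime issue as seen through the Python 
binding's own ctypes plumbing (a sketch; copying into a tuple before the next 
call is what makes it safe):
   ```python
   import ctypes

   import mxnet as mx
   from mxnet.base import _LIB, check_call, mx_uint

   a = mx.nd.zeros((2, 3))
   ndim = mx_uint()
   pdata = ctypes.POINTER(mx_uint)()
   check_call(_LIB.MXNDArrayGetShape(a.handle, ctypes.byref(ndim),
                                     ctypes.byref(pdata)))
   # pdata aliases a thread-local buffer that the next MXNDArrayGetShape /
   # MXSymbolInferShape call may overwrite, so copy it out immediately
   shape = tuple(pdata[i] for i in range(ndim.value))
   print(shape)  # (2, 3)
   ```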




[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213604713
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -28,184 +32,102 @@ You need C++ build tools and BLAS library to build MXNet 
shared library. If you
for Windows) to build the library.
 
 
-Select your preferences and follow the instructions to install MXNet from 
sources.
-
-Linux
-macOS
-Windows
-
-
-
-
-
-Then select the Linux distribution:
-
-Ubuntu
-CentOS
-Others
-
-
-- **Ubuntu** for systems supporting the `apt-get`
-  package management program
-- **CentOS** for systems supporting the `yum` package
-  management program
-- **Others** for general Linux-like systems building dependencies from scratch.
-
-
-
-Install build tools and git on `Ubuntu >= 13.10` and `Debian >= 8`.
-
-```bash
-sudo apt-get update && sudo apt-get install build-essential git
-```
-
-
-
-
-
-Install build tools and git on `CentOS >= 7` and `Fedora >= 19`.
-
-```bash
-sudo yum groupinstall -y "Development Tools" && sudo yum install -y git
-```
-
-
-
-
-
-Installing both `git` and `make` by following instructions on the websites is
-straightforward. Here we provide the instructions to build `gcc-4.8` from 
source codes.
-
-1. Install the 32-bit `libc` with one of the following system-specific 
commands:
-
-   ```bash
-   sudo apt-get install libc6-dev-i386 # In Ubuntu
-   sudo yum install glibc-devel.i686   # In RHEL (Red Hat Linux)
-   sudo yum install glibc-devel.i386   # In CentOS 5.8
-   sudo yum install glibc-devel.i686   # In CentOS 6/7
-   ```
-
-2. Download and extract the `gcc` source code with the prerequisites:
+ BLAS library
 
-   ```bash
-   wget http://mirrors.concertpass.com/gcc/releases/gcc-4.8.5/gcc-4.8.5.tar.gz
-   tar -zxf gcc-4.8.5.tar.gz
-   cd gcc-4.8.5
-   ./contrib/download_prerequisites
-   ```
+MXNet relies on the
+[BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) (Basic
+Linear Algebra Subprograms) library for numerical computations. You can install
+any one among [ATLAS](http://math-atlas.sourceforge.net/),
+[OpenBLAS](http://www.openblas.net/) and
+[MKL](https://software.intel.com/en-us/intel-mkl).
 
 Review comment:
   `[Accelerate](https://developer.apple.com/documentation/accelerate) on 
macOS.`




[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213611178
 
 

 ##
 File path: docs/install/osx_setup.md
 ##
 @@ -102,11 +102,20 @@ If building with ```GPU``` support, add the following 
configuration to config.mk
  
 
 We have installed MXNet core library. Next, we will install MXNet interface 
package for the programming language of your choice:
+- [Python](#install-mxnet-for-python)
 - [R](#install-the-mxnet-package-for-r)
 - [Julia](#install-the-mxnet-package-for-julia)
 - [Scala](#install-the-mxnet-package-for-scala)
 - [Perl](#install-the-mxnet-package-for-perl)
 
+## Install MXNet for Python
+To install the MXNet Python binding navigate to the root of the MXNet folder 
then run the following:
+
+```bash
+$ cd python
+$ pip install -e .
 
 Review comment:
   ```
   Note that the `-e` flag is optional. It is equivalent to `--editable` and 
means that if you edit the source files, these changes will be reflected in the 
package installed.
   ```




[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213604318
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -28,184 +32,102 @@ You need C++ build tools and BLAS library to build MXNet 
shared library. If you
for Windows) to build the library.
 
 
-Select your preferences and follow the instructions to install MXNet from 
sources.
-
-Linux
-macOS
-Windows
-
-
-
-
-
-Then select the Linux distribution:
-
-Ubuntu
-CentOS
-Others
-
-
-- **Ubuntu** for systems supporting the `apt-get`
-  package management program
-- **CentOS** for systems supporting the `yum` package
-  management program
-- **Others** for general Linux-like systems building dependencies from scratch.
-
-
-
-Install build tools and git on `Ubuntu >= 13.10` and `Debian >= 8`.
-
-```bash
-sudo apt-get update && sudo apt-get install build-essential git
-```
-
-
-
-
-
-Install build tools and git on `CentOS >= 7` and `Fedora >= 19`.
-
-```bash
-sudo yum groupinstall -y "Development Tools" && sudo yum install -y git
-```
-
-
-
-
-
-Installing both `git` and `make` by following instructions on the websites is
-straightforward. Here we provide the instructions to build `gcc-4.8` from 
source codes.
-
-1. Install the 32-bit `libc` with one of the following system-specific 
commands:
-
-   ```bash
-   sudo apt-get install libc6-dev-i386 # In Ubuntu
-   sudo yum install glibc-devel.i686   # In RHEL (Red Hat Linux)
-   sudo yum install glibc-devel.i386   # In CentOS 5.8
-   sudo yum install glibc-devel.i686   # In CentOS 6/7
-   ```
-
-2. Download and extract the `gcc` source code with the prerequisites:
+ BLAS library
 
-   ```bash
-   wget http://mirrors.concertpass.com/gcc/releases/gcc-4.8.5/gcc-4.8.5.tar.gz
-   tar -zxf gcc-4.8.5.tar.gz
-   cd gcc-4.8.5
-   ./contrib/download_prerequisites
-   ```
+MXNet relies on the
+[BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) (Basic
+Linear Algebra Subprograms) library for numerical computations. You can install
 
 Review comment:
   ... library for numerical computations. Those can be extended with LAPACK 
(Linear Algebra Package), an additional set of mathematical functions.




[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213610814
 
 

 ##
 File path: docs/install/index.md
 ##
 @@ -385,2146 +267,796 @@ Follow the four steps in this [docker 
documentation](https://docs.docker.com/eng
 
 If you skip this step, you need to use *sudo* each time you invoke Docker.
 
-**Step 3** Pull the MXNet docker image.
+**Step 3** Install *nvidia-docker-plugin* following the [installation 
instructions](https://github.com/NVIDIA/nvidia-docker/wiki/Installation). 
*nvidia-docker-plugin* is required to enable the usage of GPUs from the docker 
containers.
 
-```bash
-$ docker pull mxnet/python # Use sudo if you skip Step 2
+**Step 4** Pull the MXNet docker image.
+
+```
+$ docker pull mxnet/python:gpu # Use sudo if you skip Step 2
 ```
 
 You can list docker images to see if mxnet/python docker image pull was 
successful.
 
-```bash
+```
 $ docker images # Use sudo if you skip Step 2
 
 REPOSITORY     TAG     IMAGE ID       CREATED       SIZE
-mxnet/python   latest  00d026968b3c   3 weeks ago   1.41 GB
+mxnet/python   gpu     493b2683c269   3 weeks ago   4.77 GB
 ```
 
-**Step 4** Validate the installation by running simple MXNet code described 
[here](#validate-mxnet-installation).
+**Step 5** Validate the installation.
 
  
 
 
 
+Refer to the MXNet Ubuntu installation guide.
 
-Building *MXNet* from source is a 2 step process.
-1. Build the *MXNet* core shared library, `libmxnet.so`, from the C++ sources.
-2. Build the language specific bindings. Example - Python bindings, Scala 
bindings.
 
-**Minimum Requirements**
-1. [GCC 4.8](https://gcc.gnu.org/gcc-4.8/) or later to compile C++ 11.
-2. [GNU Make](https://www.gnu.org/software/make/)
+ 
+ 
+ 
+
 
-
 
-**Build the MXNet core shared library**
+
+
 
-**Step 1** Install build tools and git.
-```bash
-$ sudo apt-get update
-$ sudo apt-get install -y build-essential git
-```
+The default version of R that is installed with `apt-get` is insufficient. You 
will need to first [install R v3.4.4+ and build MXNet from 
source](ubuntu_setup.html#install-the-mxnet-package-for-r).
 
-**Step 2** Install OpenBLAS.
+After you have setup R v3.4.4+ and MXNet, you can build and install the MXNet 
R bindings with the following, assuming that `incubator-mxnet` is the source 
directory you used to build MXNet as follows:
 
-*MXNet* uses 
[BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) and 
[LAPACK](https://en.wikipedia.org/wiki/LAPACK) libraries for accelerated 
numerical computations on CPU machine. There are several flavors of BLAS/LAPACK 
libraries - [OpenBLAS](http://www.openblas.net/), 
[ATLAS](http://math-atlas.sourceforge.net/) and 
[MKL](https://software.intel.com/en-us/intel-mkl). In this step we install 
OpenBLAS. You can choose to install ATLAS or MKL.
-```bash
-$ sudo apt-get install -y libopenblas-dev liblapack-dev
 ```
+$ cd incubator-mxnet
+$ make rpkg
+```
+
+ 
 
-**Step 3** Install OpenCV.
 
-*MXNet* uses [OpenCV](http://opencv.org/) for efficient image loading and 
augmentation operations.
-```bash
-$ sudo apt-get install -y libopencv-dev
-```
+
+
+The default version of R that is installed with `apt-get` is insufficient. You 
will need to first [install R v3.4.4+ and build MXNet from 
source](ubuntu_setup.html#install-the-mxnet-package-for-r).
 
-**Step 4** Download MXNet sources and build MXNet core shared library. You can 
clone the repository as described in the following code block, or you may try 
the download links for your desired MXNet version.
+After you have setup R v3.4.4+ and MXNet, you can build and install the MXNet 
R bindings with the following, assuming that `incubator-mxnet` is the source 
directory you used to build MXNet as follows:
 
-```bash
-$ git clone --recursive https://github.com/apache/incubator-mxnet
+```
 $ cd incubator-mxnet
-$ make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas
+$ make rpkg
 ```
 
-*Note* - USE_OPENCV and USE_BLAS are make file flags to set compilation 
options to use OpenCV and BLAS library. You can explore and use more 
compilation options in `make/config.mk`.
+ 
+ 
+
 
+
+
 
+You can use the Maven packages defined in the following `dependency` to 
include MXNet in your Scala project. Please refer to the MXNet-Scala setup guide for a detailed set of 
instructions to help you with the setup process.
 
-**Build the MXNet Python binding**
-
-**Step 1** Install prerequisites - python, setup-tools, python-pip and 
libfortran (required for Numpy).
<a href="https://mvnrepository.com/artifact/org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu"><img src="https://img.shields.io/badge/org.apache.mxnet-linux gpu-green.svg" alt="maven badge"/></a>
 
-```bash
-$ sudo apt-get install -y python-dev python-setuptools python-pip libgfortran3
+```html
+
<dependency>
    <groupId>org.apache.mxnet</groupId>
    <artifactId>mxnet-full_2.11-linu

[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213605400
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -28,184 +32,102 @@ You need C++ build tools and BLAS library to build MXNet 
shared library. If you
for Windows) to build the library.
 
 
-Select your preferences and follow the instructions to install MXNet from 
sources.
-
-Linux
-macOS
-Windows
-
-
-
-
-
-Then select the Linux distribution:
-
-Ubuntu
-CentOS
-Others
-
-
-- **Ubuntu** for systems supporting the `apt-get`
-  package management program
-- **CentOS** for systems supporting the `yum` package
-  management program
-- **Others** for general Linux-like systems building dependencies from scratch.
-
-
-
-Install build tools and git on `Ubuntu >= 13.10` and `Debian >= 8`.
-
-```bash
-sudo apt-get update && sudo apt-get install build-essential git
-```
-
-
-
-
-
-Install build tools and git on `CentOS >= 7` and `Fedora >= 19`.
-
-```bash
-sudo yum groupinstall -y "Development Tools" && sudo yum install -y git
-```
-
-
-
-
-
-Installing both `git` and `make` by following instructions on the websites is
-straightforward. Here we provide the instructions to build `gcc-4.8` from 
source codes.
-
-1. Install the 32-bit `libc` with one of the following system-specific 
commands:
-
-   ```bash
-   sudo apt-get install libc6-dev-i386 # In Ubuntu
-   sudo yum install glibc-devel.i686   # In RHEL (Red Hat Linux)
-   sudo yum install glibc-devel.i386   # In CentOS 5.8
-   sudo yum install glibc-devel.i686   # In CentOS 6/7
-   ```
-
-2. Download and extract the `gcc` source code with the prerequisites:
+ BLAS library
 
-   ```bash
-   wget http://mirrors.concertpass.com/gcc/releases/gcc-4.8.5/gcc-4.8.5.tar.gz
-   tar -zxf gcc-4.8.5.tar.gz
-   cd gcc-4.8.5
-   ./contrib/download_prerequisites
-   ```
+MXNet relies on the
+[BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) (Basic
+Linear Algebra Subprograms) library for numerical computations. You can install
+any one among [ATLAS](http://math-atlas.sourceforge.net/),
+[OpenBLAS](http://www.openblas.net/) and
+[MKL](https://software.intel.com/en-us/intel-mkl).
 
-3. Build `gcc` by using 10 threads and then install to `/usr/local`
 
-   ```bash
-   mkdir release && cd release
-   ../configure --prefix=/usr/local --enable-languages=c,c++
-   make -j10
-   sudo make install
-   ```
+ Optional
 
-4. Add the lib path to your configure file such as `~/.bashrc`:
+* [OpenCV](http://opencv.org/) for Image Loading and Augmentation
+* [NVDIA CUDA and cuDNN](https://developer.nvidia.com/cuda-downloads) for 
running MXNet with GPUs
 
-   ```bash
-   export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib64
-   ```
 
-
- 
+### macOS
 
-
+Refer to the [MXNet macOS setup guide](osx_setup.html) for detailed 
instructions.
 
-1. If [Microsoft Visual Studio 2015](https://www.visualstudio.com/downloads/) 
is not already installed, download and install it. You can download and install 
the free community edition.
-2. Download and Install [CMake](https://cmake.org/) if it is not already 
installed.
 
-
+### Windows
 
-
+Refer to the [MXNet Windows setup guide](windows_setup.html) for detailed 
instructions.
 
-Install [Xcode](https://developer.apple.com/xcode/).
 
-
+### Ubuntu
 
- BLAS library
+Refer to the MXNet Ubuntu installation guide 
for build from source instructions as well as installation of language bindings.
 
-MXNet relies on the
-[BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) (Basic
-Linear Algebra Subprograms) library for numerical computations. You can install
-any one among [ATLAS](http://math-atlas.sourceforge.net/),
-[OpenBLAS](http://www.openblas.net/) and
-[MKL](https://software.intel.com/en-us/intel-mkl).
 
-
-
+### CentOS
+1. Install build tools and git on `CentOS >= 7` and `Fedora >= 19`:
 
 ```bash
-sudo apt-get install libatlas-base-dev
+sudo yum groupinstall -y "Development Tools" && sudo yum install -y git
 ```
 
-
-
-
+2. Install Atlas:
 
 ```bash
 sudo yum install atlas-devel
 ```
 
-
-
-
-
-You can follow this link to build
-[OpenBlas from 
source](https://github.com/xianyi/OpenBLAS#installation-from-source).
-
-
-
-
-
-
-macOS users can skip this step as `xcode` ships with a BLAS library.
-
-
+### Other Linux
+Installing both `git` and `make` by following instructions on the websites is
+straightforward. Here we provide the instructions to build `gcc-4.8` from 
source codes.
 
 Review comment:
   Are we sure that we need instructions on how to build gcc? Is there any use 
case where it cannot be installed with a package manager?
   
   Say: `sudo apt install gcc-4.8` or `sudo apt install gcc-4.9`



[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213603619
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -28,184 +32,102 @@ You need C++ build tools and BLAS library to build MXNet 
shared library. If you
for Windows) to build the library.
 
 Review comment:
   A C++ compiler that supports C++ 11.
   [G++ (4.8 or later)](https://gcc.gnu.org/gcc-4.8/), 
[Clang](http://clang.llvm.org/) or [Visual Studio 
2015](https://visualstudio.microsoft.com/downloads/) (on Windows) is required.
   




[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213607201
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -226,109 +148,62 @@ To build OpenCV from source code, you need the 
[cmake](https://cmake.org) librar
sudo make install
```
 
-4. Add the lib path to your configuration such as `~/.bashrc`.
+* Add the lib path to your configuration such as `~/.bashrc`.
 
```bash
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig/
```
 
-
-
-
-
-
-First download and install [OpenCV](http://opencv.org/releases.html), then set
-the environment variable `OpenCV_DIR` to point to the OpenCV build directory.
-
-
-
- Optional: 
[CUDA](https://developer.nvidia.com/cuda-downloads)/[cuDNN](https://developer.nvidia.com/cudnn)
 for Nvidia GPUs
-
-MXNet is compatible with both CUDA 7.5 and 8.0. It is recommended to use cuDNN 
5.
-
-
-
-
-Install CUDA 7.5 and cuDNN 5 on Ubuntu 14.04
-
-```bash
-wget 
http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.5-18_amd64.deb
-sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
-echo "deb 
http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64
 /" | sudo tee /etc/apt/sources.list.d/nvidia-ml.list
-sudo apt-get update
-sudo apt-get install -y linux-image-extra-`uname -r` linux-headers-`uname -r` 
linux-image-`uname -r`
-sudo apt-get install -y cuda libcudnn5-dev=5.0.5-1+cuda7.5
-```
-
-
-
 
 ### Build
 
-
-
-First clone the recent codes
-
+1. Clone the MXNet project.
 ```bash
-git clone --recursive https://github.com/dmlc/mxnet
+git clone --recursive https://github.com/apache/incubator-mxnet mxnet
 cd mxnet
 ```
 
-File
-[`make/config.mk`](https://github.com/dmlc/mxnet/blob/master/make/config.mk)
-contains all the compilation options. You can edit it and then `make`. There 
are
-some example build options
-
-If you want to build MXNet with C++ language binding, please make sure you 
read [Build the C++ package](#build-the-c-package) first.
-
-
+There is a configuration file for make,
+[`make/config.mk`](https://github.com/apache/incubator-mxnet/blob/master/make/config.mk),
 that contains all the compilation options. You can edit it and then run `make`.
 
-
+To enable C++ package, just add `USE_CPP_PACKAGE=1` when you run `make`.
 
-- Build without using OpenCV. `-j` runs multiple jobs against multi-core CPUs.
+Other typical configurations are:
 
-  ```bash
-  make -j USE_OPENCV=0
-  ```
+* `-j` runs multiple jobs against multi-core CPUs. Example using all cores on 
Linux:
 
-- Build with both GPU and OpenCV support
-
-  ```bash
-  make -j USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda 
USE_CUDNN=1
-  ```
-
-
-
-
-
-- Build with the default BLAS library and clang installed with `xcode` (OPENMP
-  is disabled because it is not supported in default by clang).
+```bash
+make -j$(nproc)
+```
 
-  ```bash
-  make -j USE_BLAS=apple USE_OPENCV=0 USE_OPENMP=0
-  ```
+* Build without using OpenCV:
 
-
+```bash
+make USE_OPENCV=0
+```
 
-
+* Build with both OpenBLAS, GPU, and OpenCV support:
 
-Use [CMake](https://cmake.org/) to create a Visual Studio solution in 
```./build```.
+```bash
+make -j USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
+```
 
-In Visual Studio, open the solution file,```.sln```, and compile it.
-These commands produce a library called ```mxnet.dll``` in the 
```./build/Release/``` or ```./build/Debug``` folder.
+* Build on macOS with the default BLAS library and clang installed with 
`xcode` (OPENMP is disabled because it is not supported in default by clang):
 
 Review comment:
   To use OpenMP on macOS you need to install the Clang compiler via `brew 
install llvm` (the one provided by Apple does not support OpenMP).


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213609152
 
 

 ##
 File path: docs/install/index.md
 ##
 @@ -63,314 +63,196 @@ Indicate your preferred configuration. Then, follow the 
customized commands to i
 
 
   Pip
-  Virtualenv
   Docker
   Build from 
Source
 
 
 
 
-
+
 
 
 
 
 
 
 
-
-The following installation instructions have been tested on Ubuntu 14.04 and 
16.04.
-
-
 
-
-
-**Step 1**  Install prerequisites - wget and latest pip.
-
-Installing *MXNet* with pip requires a latest version of `pip`. Install the 
latest version of `pip` by issuing the following command in the terminal.
-
-```bash
-$ sudo apt-get update
-$ sudo apt-get install -y wget python gcc
-$ wget https://bootstrap.pypa.io/get-pip.py && sudo python get-pip.py
-```
-
 
 
-**Step 2** Install MXNet with OpenBLAS acceleration.
-
-```bash
-$ pip install mxnet
-```
-
-**Step 3**  Install [Graphviz](http://www.graphviz.org/). (Optional, needed 
for graph visualization using `mxnet.viz` package).
-```bash
-sudo apt-get install graphviz
-pip install graphviz
 ```
-
-**Step 4**  Validate the installation by running simple MXNet code described 
[here](#validate-mxnet-installation).
-
-**Experimental Choice** If You would like to install mxnet with Intel MKL, try 
the experimental pip package with MKL:
-```bash
-$ pip install mxnet-mkl
+$ pip install mxnet
 
 Review comment:
   Not mxnet-mkl anymore? There are a lot of other options as well 
(https://pypi.org/search/?q=mxnet). Maybe add a recommendation or at least list 
some options here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213607627
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -226,109 +148,62 @@ To build OpenCV from source code, you need the 
[cmake](https://cmake.org) library
sudo make install
```
 
-4. Add the lib path to your configuration such as `~/.bashrc`.
+* Add the lib path to your configuration such as `~/.bashrc`.
 
```bash
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig/
```
 
-
-
-
-
-
-First download and install [OpenCV](http://opencv.org/releases.html), then set
-the environment variable `OpenCV_DIR` to point to the OpenCV build directory.
-
-
-
- Optional: 
[CUDA](https://developer.nvidia.com/cuda-downloads)/[cuDNN](https://developer.nvidia.com/cudnn)
 for Nvidia GPUs
-
-MXNet is compatible with both CUDA 7.5 and 8.0. It is recommended to use cuDNN 
5.
-
-
-
-
-Install CUDA 7.5 and cuDNN 5 on Ubuntu 14.04
-
-```bash
-wget 
http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.5-18_amd64.deb
-sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
-echo "deb 
http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64
 /" | sudo tee /etc/apt/sources.list.d/nvidia-ml.list
-sudo apt-get update
-sudo apt-get install -y linux-image-extra-`uname -r` linux-headers-`uname -r` 
linux-image-`uname -r`
-sudo apt-get install -y cuda libcudnn5-dev=5.0.5-1+cuda7.5
-```
-
-
-
 
 ### Build
 
-
-
-First clone the recent codes
-
+1. Clone the MXNet project.
 ```bash
-git clone --recursive https://github.com/dmlc/mxnet
+git clone --recursive https://github.com/apache/incubator-mxnet mxnet
 cd mxnet
 ```
 
-File
-[`make/config.mk`](https://github.com/dmlc/mxnet/blob/master/make/config.mk)
-contains all the compilation options. You can edit it and then `make`. There 
are
-some example build options
-
-If you want to build MXNet with C++ language binding, please make sure you 
read [Build the C++ package](#build-the-c-package) first.
-
-
+There is a configuration file for make,
+[`make/config.mk`](https://github.com/apache/incubator-mxnet/blob/master/make/config.mk),
 that contains all the compilation options. You can edit it and then run `make`.
 
-
+To enable the C++ package, add `USE_CPP_PACKAGE=1` when you run `make`.
 
-- Build without using OpenCV. `-j` runs multiple jobs against multi-core CPUs.
+Other typical configurations are:
 
-  ```bash
-  make -j USE_OPENCV=0
-  ```
+* `-j` runs multiple jobs against multi-core CPUs. Example using all cores on 
Linux:
 
-- Build with both GPU and OpenCV support
-
-  ```bash
-  make -j USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda 
USE_CUDNN=1
-  ```
-
-
-
-
-
-- Build with the default BLAS library and clang installed with `xcode` (OPENMP
-  is disabled because it is not supported in default by clang).
+```bash
+make -j$(nproc)
+```
 
-  ```bash
-  make -j USE_BLAS=apple USE_OPENCV=0 USE_OPENMP=0
-  ```
+* Build without using OpenCV:
 
-
+```bash
+make USE_OPENCV=0
 
 Review comment:
   Maybe this would be a good place to put the BLAS libraries explanation from 
https://github.com/apache/incubator-mxnet/pull/11148. What do you think?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213610179
 
 

 ##
 File path: docs/install/index.md
 ##
 @@ -385,2146 +267,796 @@ Follow the four steps in this [docker 
documentation](https://docs.docker.com/eng
 
 If you skip this step, you need to use *sudo* each time you invoke Docker.
 
-**Step 3** Pull the MXNet docker image.
+**Step 3** Install *nvidia-docker-plugin* following the [installation 
instructions](https://github.com/NVIDIA/nvidia-docker/wiki/Installation). 
*nvidia-docker-plugin* is required to enable the usage of GPUs from the docker 
containers.
 
-```bash
-$ docker pull mxnet/python # Use sudo if you skip Step 2
+**Step 4** Pull the MXNet docker image.
+
+```
+$ docker pull mxnet/python:gpu # Use sudo if you skip Step 2
 ```
 
 You can list docker images to see if mxnet/python docker image pull was 
successful.
 
-```bash
+```
 $ docker images # Use sudo if you skip Step 2
 
 REPOSITORY          TAG      IMAGE ID       CREATED        SIZE
-mxnet/python        latest   00d026968b3c   3 weeks ago    1.41 GB
+mxnet/python        gpu      493b2683c269   3 weeks ago    4.77 GB
 ```
 
-**Step 4** Validate the installation by running simple MXNet code described 
[here](#validate-mxnet-installation).
+**Step 5** Validate the installation.
 
  
 
 
 
+Refer to the MXNet Ubuntu installation guide.
 
-Building *MXNet* from source is a 2 step process.
-1. Build the *MXNet* core shared library, `libmxnet.so`, from the C++ sources.
-2. Build the language specific bindings. Example - Python bindings, Scala 
bindings.
 
-**Minimum Requirements**
-1. [GCC 4.8](https://gcc.gnu.org/gcc-4.8/) or later to compile C++ 11.
-2. [GNU Make](https://www.gnu.org/software/make/)
+ 
+ 
+ 
+
 
-
 
-**Build the MXNet core shared library**
+
+
 
-**Step 1** Install build tools and git.
-```bash
-$ sudo apt-get update
-$ sudo apt-get install -y build-essential git
-```
+The default version of R that is installed with `apt-get` is insufficient. You 
will need to first [install R v3.4.4+ and build MXNet from 
source](ubuntu_setup.html#install-the-mxnet-package-for-r).
 
-**Step 2** Install OpenBLAS.
+After you have set up R v3.4.4+ and MXNet, you can build and install the MXNet 
R bindings as follows, assuming that `incubator-mxnet` is the source 
directory you used to build MXNet:
 
-*MXNet* uses 
[BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) and 
[LAPACK](https://en.wikipedia.org/wiki/LAPACK) libraries for accelerated 
numerical computations on CPU machine. There are several flavors of BLAS/LAPACK 
libraries - [OpenBLAS](http://www.openblas.net/), 
[ATLAS](http://math-atlas.sourceforge.net/) and 
[MKL](https://software.intel.com/en-us/intel-mkl). In this step we install 
OpenBLAS. You can choose to install ATLAS or MKL.
-```bash
-$ sudo apt-get install -y libopenblas-dev liblapack-dev
 ```
+$ cd incubator-mxnet
+$ make rpkg
+```
+
+ 
 
-**Step 3** Install OpenCV.
 
-*MXNet* uses [OpenCV](http://opencv.org/) for efficient image loading and 
augmentation operations.
-```bash
-$ sudo apt-get install -y libopencv-dev
-```
+
+
+The default version of R that is installed with `apt-get` is insufficient. You 
will need to first [install R v3.4.4+ and build MXNet from 
source](ubuntu_setup.html#install-the-mxnet-package-for-r).
 
-**Step 4** Download MXNet sources and build MXNet core shared library. You can 
clone the repository as described in the following code block, or you may try 
the download links for your desired MXNet version.
+After you have set up R v3.4.4+ and MXNet, you can build and install the MXNet 
R bindings as follows, assuming that `incubator-mxnet` is the source 
directory you used to build MXNet:
 
-```bash
-$ git clone --recursive https://github.com/apache/incubator-mxnet
+```
 $ cd incubator-mxnet
-$ make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas
+$ make rpkg
 ```
 
-*Note* - USE_OPENCV and USE_BLAS are make file flags to set compilation 
options to use OpenCV and BLAS library. You can explore and use more 
compilation options in `make/config.mk`.
+ 
+ 
+
 
+
+
 
+You can use the Maven packages defined in the following `dependency` to 
include MXNet in your Scala project. Please refer to the MXNet-Scala setup guide for a detailed set of 
instructions to help you with the setup process.
 
-**Build the MXNet Python binding**
-
-**Step 1** Install prerequisites - python, setup-tools, python-pip and 
libfortran (required for Numpy).
+<a href="https://mvnrepository.com/artifact/org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu"><img src="https://img.shields.io/badge/org.apache.mxnet-linux gpu-green.svg" alt="maven badge"/></a>
 
-```bash
-$ sudo apt-get install -y python-dev python-setuptools python-pip libgfortran3
+```html
+<dependency>
+<groupId>org.apache.mxnet</groupId>
+<artifactId>mxnet-full_2.11-linu

[GitHub] lebeg commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
lebeg commented on a change in pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213606392
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -28,184 +32,102 @@ You need C++ build tools and BLAS library to build MXNet 
shared library. If you
for Windows) to build the library.
 
 Review comment:
   Can we also make CMake the default (instead of GNU Make and cmake only for 
Windows)?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on issue #12378: Disabled flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
lebeg commented on issue #12378: Disabled flaky test: 
test_mkldnn.test_activation
URL: https://github.com/apache/incubator-mxnet/pull/12378#issuecomment-416904080
 
 
   @pengzhao-intel my intent with this PR is to disable the test, not to make it 
pass. As far as I have understood from 
https://github.com/apache/incubator-mxnet/issues/12377 @luobao-intel will 
submit a fix for the test soon and he could enable the test again in the same 
PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] lebeg commented on issue #12378: Disabled flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
lebeg commented on issue #12378: Disabled flaky test: 
test_mkldnn.test_activation
URL: https://github.com/apache/incubator-mxnet/pull/12378#issuecomment-416905521
 
 
   Would that be alright with you @pengzhao-intel @luobao-intel?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] pengzhao-intel commented on issue #12378: Disabled flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
pengzhao-intel commented on issue #12378: Disabled flaky test: 
test_mkldnn.test_activation
URL: https://github.com/apache/incubator-mxnet/pull/12378#issuecomment-416906198
 
 
   It's fine for us. Please go ahead.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tiger-JK commented on issue #5271: Unknown initialization pattern when seting the initializer for a specific variable.

2018-08-29 Thread GitBox
tiger-JK commented on issue #5271: Unknown initialization pattern when seting 
the initializer for a specific variable.
URL: 
https://github.com/apache/incubator-mxnet/issues/5271#issuecomment-416924399
 
 
   The variable name should end with '_weight'.
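
   For example, a minimal sketch using the `mx.init.Mixed` pattern-matching 
initializer (names here are illustrative; patterns are tried in order):

   ```python
   import mxnet as mx

   data = mx.sym.Variable('data')
   # FullyConnected creates a parameter named 'fc1_weight'
   fc = mx.sym.FullyConnected(data=data, name='fc1', num_hidden=10)

   # names matching '.*_weight' pick up Xavier; '.*' is the fallback
   init = mx.init.Mixed(patterns=['.*_weight', '.*'],
                        initializers=[mx.init.Xavier(), mx.init.Uniform(0.1)])
   ```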


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Ishitori commented on issue #12007: Add tutorial Gotchas using NumPy

2018-08-29 Thread GitBox
Ishitori commented on issue #12007: Add tutorial Gotchas using NumPy
URL: https://github.com/apache/incubator-mxnet/pull/12007#issuecomment-416935377
 
 
   @sandeep-krishnamurthy @aaronmarkham @larroy @rahul003  thanks for the 
review. Updated the tutorial based on your comments


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sbodenstein opened a new issue #12395: C API Thread-Safety

2018-08-29 Thread GitBox
sbodenstein opened a new issue #12395: C API Thread-Safety
URL: https://github.com/apache/incubator-mxnet/issues/12395
 
 
   Applications using the C API sometimes require knowledge of the 
thread-safety of the functions in the C API. For example, it can be useful for 
a client to run `MXNDArrayWaitAll` in a separate thread so that the master 
client thread is free whilst waiting for MXNet to complete its computations. 
   
   Could the thread-safety be made official (if the implementation is 
thread-safe), which means documenting and testing thread-safety (particularly 
the waiting functions `MXNDArrayWaitAll`, `MXNDArrayWaitToWrite` and 
`MXNDArrayWaitToRead`)?
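
   For illustration, a minimal Python sketch of the pattern in question, going 
through the binding's internal ctypes handle (`mxnet.base._LIB`); whether this 
is actually safe is exactly what this issue asks to have documented:

   ```python
   import threading
   import mxnet as mx
   from mxnet.base import _LIB, check_call

   a = mx.nd.ones((2048, 2048))
   b = mx.nd.dot(a, a)  # pushed to the engine asynchronously

   # wait for all pushed work from a worker thread
   waiter = threading.Thread(target=lambda: check_call(_LIB.MXNDArrayWaitAll()))
   waiter.start()
   # ... the master thread remains free to do other work here ...
   waiter.join()
   ```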


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-08-29 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 12270e0  Bump the publish timestamp.
12270e0 is described below

commit 12270e0b686edac70b518e727258f1c743b03db5
Author: mxnet-ci 
AuthorDate: Wed Aug 29 12:56:03 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..e60304f
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Aug 29 12:56:03 UTC 2018



[GitHub] lebeg commented on issue #12379: Revert "Revert "Disable kvstore test (#11798)" (#12279)"

2018-08-29 Thread GitBox
lebeg commented on issue #12379: Revert "Revert "Disable kvstore test (#11798)" 
(#12279)"
URL: https://github.com/apache/incubator-mxnet/pull/12379#issuecomment-416958713
 
 
   Retriggered


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] terrytangyuan commented on issue #12162: Edit shape.array doc and some style improvements

2018-08-29 Thread GitBox
terrytangyuan commented on issue #12162: Edit shape.array doc and some style 
improvements
URL: https://github.com/apache/incubator-mxnet/pull/12162#issuecomment-416962573
 
 
   Re-triggered but failed again. Seems like the Python tests are failing, which is 
unrelated to this PR. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu closed pull request #12371: V1.3.x

2018-08-29 Thread GitBox
marcoabreu closed pull request #12371: V1.3.x
URL: https://github.com/apache/incubator-mxnet/pull/12371
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #12371: V1.3.x

2018-08-29 Thread GitBox
marcoabreu commented on issue #12371: V1.3.x
URL: https://github.com/apache/incubator-mxnet/pull/12371#issuecomment-416984169
 
 
   I'm closing this PR because this leads to our CI believing that the v1.3.x 
branch is actually a PR branch as opposed to a release branch. If you would like 
to cherry-pick commits, please create a branch in your own repository and 
create a pull request.
   
   @lebeg 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu closed pull request #12378: Disabled flaky test: test_mkldnn.test_activation

2018-08-29 Thread GitBox
marcoabreu closed pull request #12378: Disabled flaky test: 
test_mkldnn.test_activation
URL: https://github.com/apache/incubator-mxnet/pull/12378
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/python/mkl/test_mkldnn.py b/tests/python/mkl/test_mkldnn.py
index 6287bfc96fa..ba4cf3f0116 100644
--- a/tests/python/mkl/test_mkldnn.py
+++ b/tests/python/mkl/test_mkldnn.py
@@ -22,6 +22,7 @@
 import os
 import numpy as np
 import mxnet as mx
+import unittest
 from mxnet.test_utils import rand_ndarray, assert_almost_equal
 from mxnet import gluon
 from mxnet.gluon import nn
@@ -280,6 +281,7 @@ def check_pooling_training(stype):
 check_pooling_training(stype)
 
 
+@unittest.skip("Flaky test: 
https://github.com/apache/incubator-mxnet/issues/12377";)
 @with_seed()
 def test_activation():
 def check_activation_training(stype):


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Disabled flaky test: test_mkldnn.test_activation (#12378)

2018-08-29 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 65c374d  Disabled flaky test: test_mkldnn.test_activation (#12378)
65c374d is described below

commit 65c374db28941c9dc57e89b45c61779a55fd3025
Author: Anton Chernov 
AuthorDate: Wed Aug 29 17:03:42 2018 +0200

Disabled flaky test: test_mkldnn.test_activation (#12378)

* Disabled flaky test: test_mkldnn.test_activation

* Revert accidental change
---
 tests/python/mkl/test_mkldnn.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tests/python/mkl/test_mkldnn.py b/tests/python/mkl/test_mkldnn.py
index 6287bfc..ba4cf3f 100644
--- a/tests/python/mkl/test_mkldnn.py
+++ b/tests/python/mkl/test_mkldnn.py
@@ -22,6 +22,7 @@ import sys
 import os
 import numpy as np
 import mxnet as mx
+import unittest
 from mxnet.test_utils import rand_ndarray, assert_almost_equal
 from mxnet import gluon
 from mxnet.gluon import nn
@@ -280,6 +281,7 @@ def test_pooling():
 check_pooling_training(stype)
 
 
+@unittest.skip("Flaky test: 
https://github.com/apache/incubator-mxnet/issues/12377";)
 @with_seed()
 def test_activation():
 def check_activation_training(stype):



[GitHub] Roshrini commented on issue #11816: Segmentation fault when Fine-tuning an ONNX model with MXNet/Gluon

2018-08-29 Thread GitBox
Roshrini commented on issue #11816: Segmentation fault when Fine-tuning an ONNX 
model with MXNet/Gluon
URL: 
https://github.com/apache/incubator-mxnet/issues/11816#issuecomment-416997479
 
 
   @lyd911 Thank you for trying out this tutorial. I ran it in my environment: 
MacOS, jupyter notebook with python 3 kernel, MXNet v1.3 and ONNX v1.2.1
   It takes a lot of time but runs fine; I didn't get any segmentation fault. Can 
you please check which ONNX version you are using? MXNet import/export 
functionality currently supports ONNX v1.2.1.
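
   A quick way to check the installed versions (assuming both packages are 
importable):

   ```python
   import onnx
   import mxnet

   print(onnx.__version__)   # MXNet's ONNX import/export currently expects 1.2.1
   print(mxnet.__version__)
   ```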


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy closed pull request #12007: Add tutorial Gotchas using NumPy

2018-08-29 Thread GitBox
sandeep-krishnamurthy closed pull request #12007: Add tutorial Gotchas using 
NumPy
URL: https://github.com/apache/incubator-mxnet/pull/12007
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/tutorials/gluon/gotchas_numpy_in_mxnet.md 
b/docs/tutorials/gluon/gotchas_numpy_in_mxnet.md
new file mode 100644
index 000..c82c63edbc2
--- /dev/null
+++ b/docs/tutorials/gluon/gotchas_numpy_in_mxnet.md
@@ -0,0 +1,168 @@
+
+# Gotchas using NumPy in Apache MXNet
+
+The goal of this tutorial is to explain some common misconceptions about using 
[NumPy](http://www.numpy.org/) arrays in Apache MXNet. We are going to explain 
why you need to minimize or completely remove usage of NumPy from your Apache 
MXNet code. We are also going to show how to minimize the NumPy performance 
impact when you have to use NumPy.
+
+## Asynchronous and non-blocking nature of Apache MXNet
+
+Instead of using NumPy arrays Apache MXNet offers its own array implementation 
named 
[NDArray](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html). 
`NDArray API` was intentionally designed to be similar to `NumPy`, but there 
are differences.
+
+One key difference is in the way calculations are executed. Every `NDArray` 
manipulation in Apache MXNet is done in an asynchronous, non-blocking way. That 
means that when we write code like `c = a * b`, where both `a` and `b` are 
`NDArrays`, the function is pushed to the [Execution 
Engine](https://mxnet.incubator.apache.org/architecture/overview.html#execution-engine),
 which starts the calculation. The function returns immediately, and the 
user thread can continue execution, despite the fact that the calculation may 
not have been completed yet. 
+
+`Execution Engine` builds the computation graph which may reorder or combine 
some calculations, but it honors dependency order: if there are other 
manipulations of `c` done later in the code, the `Execution Engine` will start 
doing them once the result of `c` is available. We don't need to write 
callbacks to start execution of subsequent code - the `Execution Engine` is 
going to do it for us. 
+
+To get the result of the computation we only need to access the resulting 
variable, and the flow of the code will be blocked until the computation 
results are assigned to the resulting variable. This behavior allows us to 
increase code performance while still supporting the imperative programming mode. 
+
+Refer to the [intro tutorial to 
NDArray](https://mxnet.incubator.apache.org/tutorials/basic/ndarray.html), if 
you are new to Apache MXNet and would like to learn more about how to manipulate 
NDArrays.
+
+## Converting NDArray to NumPy Array blocks calculation
+
+Many people are familiar with NumPy and comfortable doing tensor manipulations 
using it. `NDArray API` offers a convenient [.asnumpy() 
method](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.asnumpy)
 to cast `nd.array` to `np.array`. However, by doing this cast and using 
`np.array` for calculation, we cannot use all the goodness of `Execution 
Engine`. All manipulations done on `np.array` are blocking. Moreover, the cast 
to `np.array` itself is a blocking operation (same as 
[.asscalar()](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.asscalar),
 
[.wait_to_read()](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.wait_to_read)
 and 
[.waitall()](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.waitall)).
 
+
+That means that if we have a long computation graph and, at some point, we 
want to cast the result to `np.array`, it may feel like the casting takes a lot 
of time. But what really takes this time is the `Execution Engine`, which finishes 
all the async calculations we have pushed into it to get the final result, 
which then will be converted to `np.array`.
+
+Because of the blocking nature of [.asnumpy() 
method](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.asnumpy),
 using it reduces the execution performance, especially if the calculations are 
done on GPU: Apache MXNet has to copy data from GPU to CPU to return 
`np.array`. 
+
+The best solution is to **make manipulations directly on NDArrays by methods 
provided in [NDArray 
API](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html)**.
+
+## NumPy operators vs. NDArray operators
+
+Despite the fact that [NDArray 
API](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html) was 
specifically designed to be similar to `NumPy`, sometimes it is not easy to 
replace existing `NumPy` computations. The main reason is that not all 
operators that are available in `NumPy`,
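
As a minimal illustration of the `.asnumpy()` blocking behavior described above 
(assuming an MXNet Python environment):

```python
import mxnet as mx

a = mx.nd.ones((4096, 4096))
b = mx.nd.ones((4096, 4096))
c = a * b             # returns immediately; the multiply is queued on the engine
result = c.asnumpy()  # blocks until the queued computation has finished
```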

[incubator-mxnet] branch master updated: Add tutorial Gotchas using NumPy (#12007)

2018-08-29 Thread skm
This is an automated email from the ASF dual-hosted git repository.

skm pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 1f0d6ba  Add tutorial Gotchas using NumPy (#12007)
1f0d6ba is described below

commit 1f0d6ba7fd313bcaf145767274cbf9c96a0febc8
Author: Sergey Sokolov 
AuthorDate: Wed Aug 29 08:43:41 2018 -0700

Add tutorial Gotchas using NumPy (#12007)

* Add tutorial Gotchas using NumPy

* Forcing build

* Code review fix

* Forcing build
---
 docs/tutorials/gluon/gotchas_numpy_in_mxnet.md | 168 +
 docs/tutorials/index.md|   1 +
 tests/tutorials/test_tutorials.py  |   3 +
 3 files changed, 172 insertions(+)

diff --git a/docs/tutorials/gluon/gotchas_numpy_in_mxnet.md 
b/docs/tutorials/gluon/gotchas_numpy_in_mxnet.md
new file mode 100644
index 000..c82c63e
--- /dev/null
+++ b/docs/tutorials/gluon/gotchas_numpy_in_mxnet.md
@@ -0,0 +1,168 @@
+
+# Gotchas using NumPy in Apache MXNet
+
+The goal of this tutorial is to explain some common misconceptions about using 
[NumPy](http://www.numpy.org/) arrays in Apache MXNet. We are going to explain 
why you need to minimize or completely remove usage of NumPy from your Apache 
MXNet code. We are also going to show how to minimize the NumPy performance 
impact when you have to use NumPy.
+
+## Asynchronous and non-blocking nature of Apache MXNet
+
+Instead of using NumPy arrays Apache MXNet offers its own array implementation 
named 
[NDArray](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html). 
`NDArray API` was intentionally designed to be similar to `NumPy`, but there 
are differences.
+
+One key difference is in the way calculations are executed. Every `NDArray` 
manipulation in Apache MXNet is done in an asynchronous, non-blocking way. That 
means that when we write code like `c = a * b`, where both `a` and `b` are 
`NDArrays`, the function is pushed to the [Execution 
Engine](https://mxnet.incubator.apache.org/architecture/overview.html#execution-engine),
 which starts the calculation. The function returns immediately, and the 
user thread can continue execution, despite [...]
+
+`Execution Engine` builds the computation graph which may reorder or combine 
some calculations, but it honors dependency order: if there are other 
manipulations of `c` done later in the code, the `Execution Engine` will start 
doing them once the result of `c` is available. We don't need to write 
callbacks to start execution of subsequent code - the `Execution Engine` is 
going to do it for us. 
+
+To get the result of the computation we only need to access the resulting 
variable, and the flow of the code will be blocked until the computation 
results are assigned to the resulting variable. This behavior allows us to 
increase code performance while still supporting the imperative programming mode. 
+
+Refer to the [intro tutorial to 
NDArray](https://mxnet.incubator.apache.org/tutorials/basic/ndarray.html), if 
you are new to Apache MXNet and would like to learn more about how to manipulate 
NDArrays.
+
+## Converting NDArray to NumPy Array blocks calculation
+
+Many people are familiar with NumPy and comfortable doing tensor manipulations 
using it. `NDArray API` offers a convenient [.asnumpy() 
method](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.asnumpy)
 to cast `nd.array` to `np.array`. However, by doing this cast and using 
`np.array` for calculation, we cannot use all the goodness of `Execution 
Engine`. All manipulations done on `np.array` are blocking. Moreover, the cast 
to `np.array` itself is a blo [...]
+
+That means that if we have a long computation graph and, at some point, we 
want to cast the result to `np.array`, it may feel like the casting takes a lot 
of time. But what really takes this time is the `Execution Engine`, which finishes 
all the async calculations we have pushed into it to get the final result, 
which then will be converted to `np.array`.
+
+Because of the blocking nature of [.asnumpy() 
method](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html#mxnet.ndarray.NDArray.asnumpy),
 using it reduces the execution performance, especially if the calculations are 
done on GPU: Apache MXNet has to copy data from GPU to CPU to return 
`np.array`. 
+
+The best solution is to **make manipulations directly on NDArrays by methods 
provided in [NDArray 
API](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html)**.
+
+## NumPy operators vs. NDArray operators
+
+Despite the fact that [NDArray 
API](https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html) was 
specifically designed to be similar to `NumPy`, sometimes it is not easy to 
replace existing `NumPy` computations. The main reason is that not all 
operators that are available in `NumPy`, ar

[GitHub] dzabraev commented on issue #12393: Deadlock in save_checkpoint when using threading

2018-08-29 Thread GitBox
dzabraev commented on issue #12393: Deadlock in save_checkpoint when using 
threading
URL: 
https://github.com/apache/incubator-mxnet/issues/12393#issuecomment-417007354
 
 
   I realized that I shouldn't use mx.nd.array in different threads, because it 
uses the Push API.
   
   I found [here](https://mxnet.incubator.apache.org/architecture/overview.html)
   
   > Push APIs are not thread-safe. To be specific, only one thread should make 
engine API calls at a time.
   
   I think the Python API documentation should state exactly which Python 
API functions may be called only from the main thread.
   
   It would be useful to have a function for creating an ndarray in a non-blocking 
fashion, because mx.nd.array is very slow and blocks the main thread for a 
long time.
   
   ```
   heavy_nparray = ...
   ndarr = mx.nd.array_nonblock(heavy_nparray)
   # do something
   ndarr.wait()
   ```
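
   A workaround sketch under the current semantics (illustrative): do the slow 
NumPy construction in a worker thread, but keep the engine call itself 
(mx.nd.array) on the main thread:

   ```python
   import threading
   import queue
   import numpy as np
   import mxnet as mx

   q = queue.Queue()

   def producer():
       heavy_nparray = np.random.rand(4096, 4096)  # expensive, but engine-free
       q.put(heavy_nparray)

   threading.Thread(target=producer).start()
   ndarr = mx.nd.array(q.get())  # the only engine API call, on the main thread
   ```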
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] nswamy commented on issue #12102: site-wide social include

2018-08-29 Thread GitBox
nswamy commented on issue #12102: site-wide social include
URL: https://github.com/apache/incubator-mxnet/pull/12102#issuecomment-417011774
 
 
   Please check the site at different resolutions; on my Mac the logos look 
small


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] xcgoner commented on issue #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
xcgoner commented on issue #12376: [MXNET-854] SVRG Optimization in Python 
Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#issuecomment-417014652
 
 
   @StephanieYuan The original version of SVRG could be time-consuming due to 
the computation of the full gradient at the beginning of each epoch. Could you 
also include the cheap version in your implementation:
   https://arxiv.org/abs/1511.01942
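
   For reference, a minimal NumPy sketch of the standard SVRG loop on a toy 
least-squares problem (not MXNet-specific); the full-gradient line is the 
expensive step that the cheap variant avoids recomputing exactly:

   ```python
   import numpy as np

   rng = np.random.RandomState(0)
   n, d = 100, 5
   X, y = rng.randn(n, d), rng.randn(n)

   def grad_i(w, i):
       # gradient of the i-th loss term 0.5 * (x_i . w - y_i)^2
       return (X[i].dot(w) - y[i]) * X[i]

   w, lr = np.zeros(d), 0.05
   for epoch in range(10):
       w_snap = w.copy()
       # full gradient at the snapshot -- the expensive step
       mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
       for _ in range(n):
           i = rng.randint(n)
           w -= lr * (grad_i(w, i) - grad_i(w_snap, i) + mu)
   ```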


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on issue #9557: update_on_kvstore error setting with multiple machines

2018-08-29 Thread GitBox
sandeep-krishnamurthy commented on issue #9557: update_on_kvstore error setting 
with multiple machines
URL: 
https://github.com/apache/incubator-mxnet/issues/9557#issuecomment-417026101
 
 
   @yuewu001 
   1. Trainer is now fixed and uses the same logic as in Module. [Here in 
Trainer](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/trainer.py#L188)
 is where you create the KVStore using 
[model._create_kvstore](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/model.py#L77).
 However, please note that it is not recommended to use KVStore when the gradient is 
sparse; hence, it will be set to false in that case.
   2. To save optimizer states, you can use trainer.save_states(file_name). 
https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/trainer.py#L376
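
   For example (a minimal sketch; `in_units` is fixed so parameters are fully 
initialized, and the one-step warm-up just ensures optimizer state exists 
before saving):

   ```python
   import mxnet as mx
   from mxnet import autograd, gluon

   net = gluon.nn.Dense(2, in_units=4)
   net.initialize()
   trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

   with autograd.record():
       loss = net(mx.nd.ones((1, 4))).sum()
   loss.backward()
   trainer.step(batch_size=1)

   trainer.save_states('trainer.states')  # later: trainer.load_states('trainer.states')
   ```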
   
   Resolving the issue. Please reopen if you still have questions/issues.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy closed issue #9557: update_on_kvstore error setting with multiple machines

2018-08-29 Thread GitBox
sandeep-krishnamurthy closed issue #9557: update_on_kvstore error setting with 
multiple machines
URL: https://github.com/apache/incubator-mxnet/issues/9557
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Roshrini opened a new pull request #12396: Release 1.3.x branch disabling one test

2018-08-29 Thread GitBox
Roshrini opened a new pull request #12396: Release 1.3.x branch disabling one 
test
URL: https://github.com/apache/incubator-mxnet/pull/12396
 
 
   * Disable a test that's taking longer than 10 minutes with the Python 2
 interpreter in the Straight Dope Nightly.
   
   ## Description ##
   Missed cherry-picking this on release branch
   https://github.com/apache/incubator-mxnet/pull/12326
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha closed pull request #12396: Release 1.3.x branch disabling one test

2018-08-29 Thread GitBox
szha closed pull request #12396: Release 1.3.x branch disabling one test
URL: https://github.com/apache/incubator-mxnet/pull/12396
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/nightly/straight_dope/test_notebooks_single_gpu.py 
b/tests/nightly/straight_dope/test_notebooks_single_gpu.py
index a60498c8786..555b8092b39 100644
--- a/tests/nightly/straight_dope/test_notebooks_single_gpu.py
+++ b/tests/nightly/straight_dope/test_notebooks_single_gpu.py
@@ -35,6 +35,7 @@
 'chapter02_supervised-learning/environment',
 'chapter03_deep-neural-networks/kaggle-gluon-kfold',
 'chapter04_convolutional-neural-networks/deep-cnns-alexnet',  # > 10 mins.
+'chapter05_recurrent-neural-networks/rnns-gluon', # > 10 mins.
 'chapter06_optimization/gd-sgd-scratch',  # Overflow warning is intended.
 'chapter06_optimization/gd-sgd-gluon',  # Overflow warning is intended.
 'chapter07_distributed-learning/multiple-gpus-scratch',
@@ -176,9 +177,6 @@ def test_lstm_scratch(self):
 def test_gru_scratch(self):
 assert 
_test_notebook('chapter05_recurrent-neural-networks/gru-scratch')
 
-def test_rnns_gluon(self):
-assert _test_notebook('chapter05_recurrent-neural-networks/rnns-gluon')
-
 # Chapter 6
 
 def test_optimization_intro(self):


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] access2rohit commented on issue #12160: Remove conflicting llvm OpenMP from cmake builds

2018-08-29 Thread GitBox
access2rohit commented on issue #12160: Remove conflicting llvm OpenMP from 
cmake builds
URL: https://github.com/apache/incubator-mxnet/pull/12160#issuecomment-417030344
 
 
   @lebeg LGTM. Would definitely like to see perf numbers though


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch v1.3.x updated: [MXAPPS-581] Disable a long test in the SD nightly. (#12326) (#12396)

2018-08-29 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch v1.3.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.3.x by this push:
 new 5be1eee  [MXAPPS-581] Disable a long test in the SD nightly. (#12326) 
(#12396)
5be1eee is described below

commit 5be1eeed3614ef3181b1410005cd9a142b75c8f3
Author: Roshani Nagmote 
AuthorDate: Wed Aug 29 10:07:06 2018 -0700

[MXAPPS-581] Disable a long test in the SD nightly. (#12326) (#12396)

* Disable a test that's taking longer than 10 minutes with the Python 2
  interpreter in the Straight Dope Nightly.
---
 tests/nightly/straight_dope/test_notebooks_single_gpu.py | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tests/nightly/straight_dope/test_notebooks_single_gpu.py 
b/tests/nightly/straight_dope/test_notebooks_single_gpu.py
index a6437cd..5eeb52f 100644
--- a/tests/nightly/straight_dope/test_notebooks_single_gpu.py
+++ b/tests/nightly/straight_dope/test_notebooks_single_gpu.py
@@ -35,6 +35,7 @@ NOTEBOOKS_WHITELIST = [
 'chapter02_supervised-learning/environment',
 'chapter03_deep-neural-networks/kaggle-gluon-kfold',
 'chapter04_convolutional-neural-networks/deep-cnns-alexnet',  # > 10 mins.
+'chapter05_recurrent-neural-networks/rnns-gluon', # > 10 mins.
 'chapter06_optimization/gd-sgd-scratch',  # Overflow warning is intended.
 'chapter06_optimization/gd-sgd-gluon',  # Overflow warning is intended.
 'chapter07_distributed-learning/multiple-gpus-scratch',
@@ -177,9 +178,6 @@ class StraightDopeSingleGpuTests(unittest.TestCase):
 def test_gru_scratch(self):
 assert 
_test_notebook('chapter05_recurrent-neural-networks/gru-scratch')
 
-def test_rnns_gluon(self):
-assert _test_notebook('chapter05_recurrent-neural-networks/rnns-gluon')
-
 # Chapter 6
 
 def test_optimization_intro(self):



[GitHub] haojin2 commented on a change in pull request #12385: fixed flaky test issue for test_operator_gpu.test_convolution_grouping

2018-08-29 Thread GitBox
haojin2 commented on a change in pull request #12385: fixed flaky test issue 
for test_operator_gpu.test_convolution_grouping
URL: https://github.com/apache/incubator-mxnet/pull/12385#discussion_r213763258
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -1586,7 +1586,7 @@ def check_batchnorm_training(stype):
 check_batchnorm_training('default')
 
 
-@unittest.skip("Flaky test 
https://github.com/apache/incubator-mxnet/issues/12219";)
+#@unittest.skip("Flaky test 
https://github.com/apache/incubator-mxnet/issues/12219";)
 
 Review comment:
   Please remove instead of commenting out.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham opened a new pull request #12397: update C++ example so it is easier to run

2018-08-29 Thread GitBox
aaronmarkham opened a new pull request #12397: update C++ example so it is 
easier to run
URL: https://github.com/apache/incubator-mxnet/pull/12397
 
 
   ## Description ##
   This PR updates a C++ image classification example, fixing grammar and 
adding instructions so it can be run successfully the first time through.
   Currently, you will segfault and fail a few times until you read through all 
of the tips and troubleshoot.
   
   ## Comments
   Maybe this example should be moved into the cpp-package folder? Or, should 
the examples in the cpp-package folder be moved out to the general 
/mxnet/example folder?
   
   Maybe this should use something from the model zoo instead of linking to old 
models on data.mxnet.io?
   
   We can always do either of these later and just get this fixed up for now...
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ptrendx opened a new pull request #12398: Add support for networks with multiple outputs in ONNX exporter

2018-08-29 Thread GitBox
ptrendx opened a new pull request #12398: Add support for networks with 
multiple outputs in ONNX exporter
URL: https://github.com/apache/incubator-mxnet/pull/12398
 
 
   ## Description ##
   Add support for networks with multiple outputs to ONNX exporter
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
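
   For illustration, a sketch of the kind of multi-output export this enables, 
assuming the `mxnet.contrib.onnx` export API; names are illustrative, and 
`params` is assumed to hold the trained arg/aux parameters:

   ```python
   import numpy as np
   import mxnet as mx
   from mxnet.contrib import onnx as onnx_mxnet

   data = mx.sym.Variable('data')
   fc = mx.sym.FullyConnected(data=data, num_hidden=10, name='fc1')
   sm = mx.sym.softmax(fc, name='sm1')
   net = mx.sym.Group([fc, sm])  # a network with multiple outputs

   # onnx_mxnet.export_model(net, params, [(1, 64)], np.float32, 'multi_out.onnx')
   ```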
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ptrendx opened a new pull request #12399: ONNX export: Add Crop, Deconvolution and fix the default stride of Pooling to 1

2018-08-29 Thread GitBox
ptrendx opened a new pull request #12399: ONNX export: Add Crop, Deconvolution 
and fix the default stride of Pooling to 1
URL: https://github.com/apache/incubator-mxnet/pull/12399
 
 
   ## Description ##
   Add operators to ONNX exporter:
- Crop
- Deconvolution
   Fix pooling default stride to 1.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
aaronmarkham commented on a change in pull request #12388: Installation 
instructions consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213766909
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -28,184 +32,102 @@ You need C++ build tools and BLAS library to build MXNet 
shared library. If you
for Windows) to build the library.
 
 Review comment:
   I'd be happy to do that, but so many other downstream instructions use 
`make`. Are we ready to update all of those too?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
aaronmarkham commented on a change in pull request #12388: Installation 
instructions consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213767427
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -28,184 +32,102 @@ You need C++ build tools and BLAS library to build MXNet 
shared library. If you
for Windows) to build the library.
 
 
-Select your preferences and follow the instructions to install MXNet from 
sources.
-
-Linux
-macOS
-Windows
-
-
-
-
-
-Then select the Linux distribution:
-
-Ubuntu
-CentOS
-Others
-
-
-- **Ubuntu** for systems supporting the `apt-get`
-  package management program
-- **CentOS** for systems supporting the `yum` package
-  management program
-- **Others** for general Linux-like systems building dependencies from scratch.
-
-
-
-Install build tools and git on `Ubuntu >= 13.10` and `Debian >= 8`.
-
-```bash
-sudo apt-get update && sudo apt-get install build-essential git
-```
-
-
-
-
-
-Install build tools and git on `CentOS >= 7` and `Fedora >= 19`.
-
-```bash
-sudo yum groupinstall -y "Development Tools" && sudo yum install -y git
-```
-
-
-
-
-
-Installing both `git` and `make` by following instructions on the websites is
-straightforward. Here we provide the instructions to build `gcc-4.8` from 
source codes.
-
-1. Install the 32-bit `libc` with one of the following system-specific 
commands:
-
-   ```bash
-   sudo apt-get install libc6-dev-i386 # In Ubuntu
-   sudo yum install glibc-devel.i686   # In RHEL (Red Hat Linux)
-   sudo yum install glibc-devel.i386   # In CentOS 5.8
-   sudo yum install glibc-devel.i686   # In CentOS 6/7
-   ```
-
-2. Download and extract the `gcc` source code with the prerequisites:
+ BLAS library
 
-   ```bash
-   wget http://mirrors.concertpass.com/gcc/releases/gcc-4.8.5/gcc-4.8.5.tar.gz
-   tar -zxf gcc-4.8.5.tar.gz
-   cd gcc-4.8.5
-   ./contrib/download_prerequisites
-   ```
+MXNet relies on the
+[BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) (Basic
+Linear Algebra Subprograms) library for numerical computations. You can install
 
 Review comment:
   seems like this would be a good spot to mention mkl-dnn? Also could link to 
those instructions within the project... wdyt?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
aaronmarkham commented on a change in pull request #12388: Installation 
instructions consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213768686
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -28,184 +32,102 @@ You need C++ build tools and BLAS library to build MXNet 
shared library. If you
for Windows) to build the library.
 
 
-Select your preferences and follow the instructions to install MXNet from 
sources.
-
-Linux
-macOS
-Windows
-
-
-
-
-
-Then select the Linux distribution:
-
-Ubuntu
-CentOS
-Others
-
-
-- **Ubuntu** for systems supporting the `apt-get`
-  package management program
-- **CentOS** for systems supporting the `yum` package
-  management program
-- **Others** for general Linux-like systems building dependencies from scratch.
-
-
-
-Install build tools and git on `Ubuntu >= 13.10` and `Debian >= 8`.
-
-```bash
-sudo apt-get update && sudo apt-get install build-essential git
-```
-
-
-
-
-
-Install build tools and git on `CentOS >= 7` and `Fedora >= 19`.
-
-```bash
-sudo yum groupinstall -y "Development Tools" && sudo yum install -y git
-```
-
-
-
-
-
-Installing both `git` and `make` by following instructions on the websites is
-straightforward. Here we provide the instructions to build `gcc-4.8` from 
source codes.
-
-1. Install the 32-bit `libc` with one of the following system-specific 
commands:
-
-   ```bash
-   sudo apt-get install libc6-dev-i386 # In Ubuntu
-   sudo yum install glibc-devel.i686   # In RHEL (Red Hat Linux)
-   sudo yum install glibc-devel.i386   # In CentOS 5.8
-   sudo yum install glibc-devel.i686   # In CentOS 6/7
-   ```
-
-2. Download and extract the `gcc` source code with the prerequisites:
+ BLAS library
 
-   ```bash
-   wget http://mirrors.concertpass.com/gcc/releases/gcc-4.8.5/gcc-4.8.5.tar.gz
-   tar -zxf gcc-4.8.5.tar.gz
-   cd gcc-4.8.5
-   ./contrib/download_prerequisites
-   ```
+MXNet relies on the
+[BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) (Basic
+Linear Algebra Subprograms) library for numerical computations. You can install
+any one of [ATLAS](http://math-atlas.sourceforge.net/),
+[OpenBLAS](http://www.openblas.net/), and
+[MKL](https://software.intel.com/en-us/intel-mkl).
 
-3. Build `gcc` by using 10 threads and then install to `/usr/local`
 
-   ```bash
-   mkdir release && cd release
-   ../configure --prefix=/usr/local --enable-languages=c,c++
-   make -j10
-   sudo make install
-   ```
+ Optional
 
-4. Add the lib path to your configure file such as `~/.bashrc`:
+* [OpenCV](http://opencv.org/) for Image Loading and Augmentation
+* [NVIDIA CUDA and cuDNN](https://developer.nvidia.com/cuda-downloads) for 
running MXNet with GPUs
 
-   ```bash
-   export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib64
-   ```
 
-
- 
+### macOS
 
-
+Refer to the [MXNet macOS setup guide](osx_setup.html) for detailed 
instructions.
 
-1. If [Microsoft Visual Studio 2015](https://www.visualstudio.com/downloads/) 
is not already installed, download and install it. You can download and install 
the free community edition.
-2. Download and Install [CMake](https://cmake.org/) if it is not already 
installed.
 
-
+### Windows
 
-
+Refer to the [MXNet Windows setup guide](windows_setup.html) for detailed 
instructions.
 
-Install [Xcode](https://developer.apple.com/xcode/).
 
-
+### Ubuntu
 
- BLAS library
+Refer to the [MXNet Ubuntu installation guide](ubuntu_setup.html) for build from source instructions as well as installation of language bindings.
 
-MXNet relies on the
-[BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms) (Basic
-Linear Algebra Subprograms) library for numerical computations. You can install
-any one among [ATLAS](http://math-atlas.sourceforge.net/),
-[OpenBLAS](http://www.openblas.net/) and
-[MKL](https://software.intel.com/en-us/intel-mkl).
 
-
-
+### CentOS
+1. Install build tools and git on `CentOS >= 7` and `Fedora >= 19`:
 
 ```bash
-sudo apt-get install libatlas-base-dev
+sudo yum groupinstall -y "Development Tools" && sudo yum install -y git
 ```
 
-
-
-
+2. Install Atlas:
 
 ```bash
 sudo yum install atlas-devel
 ```
 
-
-
-
-
-You can follow this link to build
-[OpenBlas from 
source](https://github.com/xianyi/OpenBLAS#installation-from-source).
-
-
-
-
-
-
-macOS users can skip this step as `xcode` ships with a BLAS library.
-
-
+### Other Linux
+Installing both `git` and `make` by following instructions on the websites is
+straightforward. Here we provide the instructions to build `gcc-4.8` from source.
 
 Review comment:
   So, with Caffe2 on CentOS, I had to build gcc from scratch. I haven't tried 
MXNet on CentOS yet, but I wonder if whoever wrote this ran into a similar 
issue.
   
   Since this is for "Other Linux", the `apt` instructions won't cut it. 
   My preference would be to split out a separate page for CentOS and have 
thorough, tested instructions. (maybe as a separate PR?)

--

[GitHub] szha closed pull request #12306: SoftMin Operator

2018-08-29 Thread GitBox
szha closed pull request #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/contrib/ctc_loss-inl.h 
b/src/operator/contrib/ctc_loss-inl.h
index 72209ae286c..9380be47451 100644
--- a/src/operator/contrib/ctc_loss-inl.h
+++ b/src/operator/contrib/ctc_loss-inl.h
@@ -409,7 +409,8 @@ class CTCLossOp : public Operator {
 
 // since the input is activation before softmax and cudnn ctc takes softmax
 // apply softmax to inputs first.
-    mxnet_op::Softmax<mxnet_op::softmax_fwd>(s, data.dptr_, prob.dptr_, data.shape_, 2, 1.0);
+    mxnet_op::Softmax<mxnet_op::softmax_fwd, false>(
+      s, data.dptr_, prob.dptr_, data.shape_, 2, 1.0);
 
 CUDNN_CALL(cudnnCTCLoss(s->dnn_handle_,
 prob_desc_,
@@ -426,8 +427,8 @@ class CTCLossOp : public Operator {
 workspace_bytes));
 
 if (req_grad) {
-      mxnet_op::SoftmaxGrad<mshadow_op::mul, mxnet_op::softmax_bwd>(s,
-          prob.dptr_, grad.dptr_, grad.dptr_, data.shape_, 2, 1.0);
+      mxnet_op::SoftmaxGrad<mshadow_op::mul, mxnet_op::softmax_bwd, false>(
+        s, prob.dptr_, grad.dptr_, grad.dptr_, data.shape_, 2, 1.0);
   Assign(grad, mxnet::kWriteInplace, grad * alphabet_size);
 }
   }
diff --git a/src/operator/nn/softmax-inl.h b/src/operator/nn/softmax-inl.h
index 4a19db7c36b..c063e385f63 100644
--- a/src/operator/nn/softmax-inl.h
+++ b/src/operator/nn/softmax-inl.h
@@ -51,7 +51,7 @@ struct log_softmax_fwd {
 };
 
 
-template<typename OP, typename DType, int ndim>
+template<typename OP, bool negate, typename DType, int ndim>
 inline void Softmax(Stream<cpu> *s, DType *in, DType *out,
                     Shape<ndim> shape, int axis, const DType temperature) {
   index_t M = shape[axis];
@@ -65,30 +65,37 @@ inline void Softmax(Stream<cpu> *s, DType *in, DType *out,
   for (int i = 0; i < static_cast<int>(N); ++i) {
 index_t base = unravel_dot(i, sshape, stride);
 
-DType mmax = in[base];
+DType mmax = negate ? -in[base] : in[base];
+DType val;
 for (index_t j = 1; j < M; ++j) {
-  if (mmax < in[base + j*sa]) mmax = in[base + j*sa];
+  val = negate ? -in[base + j*sa] : in[base + j*sa];
+  if (mmax < val) mmax = val;
 }
 
 DType sum = DType(0);
+DType in_val;
 // By default temperature is 1.0, and only in reinforcement training
 // users would set it to other values.
 // Adding a branch here to save the CPU 'divide-by-1' computation at 
runtime
 if (temperature == 1.0) {
   for (index_t j = 0; j < M; ++j) {
-sum += std::exp(in[base + j*sa] - mmax);
+in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+sum += std::exp(in_val - mmax);
   }
 
   for (index_t j = 0; j < M; ++j) {
-out[base + j*sa] = OP::Map(in[base + j*sa] - mmax, sum);
+in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+out[base + j*sa] = OP::Map(in_val - mmax, sum);
   }
 } else {
   for (index_t j = 0; j < M; ++j) {
-sum += std::exp((in[base + j*sa] - mmax)/temperature);
+in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+sum += std::exp((in_val - mmax)/temperature);
   }
 
   for (index_t j = 0; j < M; ++j) {
-out[base + j*sa] = OP::Map((in[base + j*sa] - mmax)/temperature, sum);
+in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+out[base + j*sa] = OP::Map((in_val - mmax)/temperature, sum);
   }
 }
   }
@@ -111,7 +118,7 @@ struct log_softmax_bwd {
 };
 
 
-template<typename OP1, typename OP2, int Req, typename DType, int ndim>
+template<typename OP1, typename OP2, int Req, bool negate, typename DType, int ndim>
 inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
                         DType *igrad, Shape<ndim> shape, int axis,
                         const DType temperature) {
@@ -137,12 +144,16 @@ inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
 DType final_result;
 if (temperature == 1.0) {
   for (index_t j = 0; j < M; ++j) {
-final_result = OP2::Map(ograd[base + j*sa], out[base + j*sa], sum);
+final_result = negate ?
+   -OP2::Map(ograd[base + j*sa], out[base + j*sa], sum) :
+   OP2::Map(ograd[base + j*sa], out[base + j*sa], sum);
 KERNEL_ASSIGN(igrad[base + j*sa], Req, final_result);
   }
 } else {
   for (index_t j = 0; j < M; ++j) {
-final_result = OP2::Map(ograd[base + j*sa], out[base + j*sa], sum) / 
temperature;
+final_result = negate ?
+   -OP2::Map(ograd[base + j*sa], out[base + j*sa], sum) / 
temperature :
+   OP2::Map(ograd[base + j*sa], out[base + j*sa], sum) / 
temperature;
 KERNEL_ASSIGN(igrad[base + j*sa], Req, final_result);
   }
 }
@@ -151,7 +162,7 @@ inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
 
 
 #ifdef __CUDACC__
-template<int x_bits, typename OP, typename DType, int ndim>
+template<int x_bits, typename OP, bool negate, typename DType, int ndim>
 __global__ void softmax_compute_kernel(DType *in, DType *out, index_t M, int axis,
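
For context on what the new `negate` flag buys: softmin is just softmax evaluated on the negated input, and the patch folds that negation into the kernels instead of materializing `-x`. A minimal NumPy sketch of the identity (function names here are illustrative, not part of the patch):

```python
import numpy as np

def softmax(x, axis=-1, temperature=1.0):
    # subtract the row max first, mirroring the mmax logic in the C++ kernel
    z = (x - x.max(axis=axis, keepdims=True)) / temperature
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def softmin(x, axis=-1, temperature=1.0):
    # softmin(x) == softmax(-x); the kernel's negate flag applies the sign
    # flip element by element rather than allocating a negated copy
    return softmax(-x, axis=axis, temperature=temperature)

x = np.array([[1.0, 2.0, 3.0]])
assert np.allclose(softmin(x), softmax(-x))  # largest weight on the smallest entry
```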

[incubator-mxnet] branch master updated: support softmin operator with unit test (#12306)

2018-08-29 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new ba8a9d1  support softmin operator with unit test (#12306)
ba8a9d1 is described below

commit ba8a9d13e1b549d061f1933c463cfad5e7bdd7aa
Author: Hao Jin 
AuthorDate: Wed Aug 29 10:35:44 2018 -0700

support softmin operator with unit test (#12306)
---
 src/operator/contrib/ctc_loss-inl.h|  7 +--
 src/operator/nn/softmax-inl.h  | 88 --
 src/operator/nn/softmax.cc | 39 +++
 src/operator/nn/softmax.cu |  7 +++
 tests/python/unittest/test_operator.py | 24 --
 5 files changed, 123 insertions(+), 42 deletions(-)

diff --git a/src/operator/contrib/ctc_loss-inl.h 
b/src/operator/contrib/ctc_loss-inl.h
index 72209ae..9380be4 100644
--- a/src/operator/contrib/ctc_loss-inl.h
+++ b/src/operator/contrib/ctc_loss-inl.h
@@ -409,7 +409,8 @@ class CTCLossOp : public Operator {
 
 // since the input is activation before softmax and cudnn ctc takes softmax
 // apply softmax to inputs first.
-    mxnet_op::Softmax<mxnet_op::softmax_fwd>(s, data.dptr_, prob.dptr_, data.shape_, 2, 1.0);
+    mxnet_op::Softmax<mxnet_op::softmax_fwd, false>(
+      s, data.dptr_, prob.dptr_, data.shape_, 2, 1.0);
 
 CUDNN_CALL(cudnnCTCLoss(s->dnn_handle_,
 prob_desc_,
@@ -426,8 +427,8 @@ class CTCLossOp : public Operator {
 workspace_bytes));
 
 if (req_grad) {
-      mxnet_op::SoftmaxGrad<mshadow_op::mul, mxnet_op::softmax_bwd>(s,
-          prob.dptr_, grad.dptr_, grad.dptr_, data.shape_, 2, 1.0);
+      mxnet_op::SoftmaxGrad<mshadow_op::mul, mxnet_op::softmax_bwd, false>(
+        s, prob.dptr_, grad.dptr_, grad.dptr_, data.shape_, 2, 1.0);
   Assign(grad, mxnet::kWriteInplace, grad * alphabet_size);
 }
   }
diff --git a/src/operator/nn/softmax-inl.h b/src/operator/nn/softmax-inl.h
index 4a19db7..c063e38 100644
--- a/src/operator/nn/softmax-inl.h
+++ b/src/operator/nn/softmax-inl.h
@@ -51,7 +51,7 @@ struct log_softmax_fwd {
 };
 
 
-template<typename OP, typename DType, int ndim>
+template<typename OP, bool negate, typename DType, int ndim>
 inline void Softmax(Stream<cpu> *s, DType *in, DType *out,
                     Shape<ndim> shape, int axis, const DType temperature) {
   index_t M = shape[axis];
@@ -65,30 +65,37 @@ inline void Softmax(Stream<cpu> *s, DType *in, DType *out,
   for (int i = 0; i < static_cast<int>(N); ++i) {
 index_t base = unravel_dot(i, sshape, stride);
 
-DType mmax = in[base];
+DType mmax = negate ? -in[base] : in[base];
+DType val;
 for (index_t j = 1; j < M; ++j) {
-  if (mmax < in[base + j*sa]) mmax = in[base + j*sa];
+  val = negate ? -in[base + j*sa] : in[base + j*sa];
+  if (mmax < val) mmax = val;
 }
 
 DType sum = DType(0);
+DType in_val;
 // By default temperature is 1.0, and only in reinforcement training
 // users would set it to other values.
 // Adding a branch here to save the CPU 'divide-by-1' computation at 
runtime
 if (temperature == 1.0) {
   for (index_t j = 0; j < M; ++j) {
-sum += std::exp(in[base + j*sa] - mmax);
+in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+sum += std::exp(in_val - mmax);
   }
 
   for (index_t j = 0; j < M; ++j) {
-out[base + j*sa] = OP::Map(in[base + j*sa] - mmax, sum);
+in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+out[base + j*sa] = OP::Map(in_val - mmax, sum);
   }
 } else {
   for (index_t j = 0; j < M; ++j) {
-sum += std::exp((in[base + j*sa] - mmax)/temperature);
+in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+sum += std::exp((in_val - mmax)/temperature);
   }
 
   for (index_t j = 0; j < M; ++j) {
-out[base + j*sa] = OP::Map((in[base + j*sa] - mmax)/temperature, sum);
+in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+out[base + j*sa] = OP::Map((in_val - mmax)/temperature, sum);
   }
 }
   }
@@ -111,7 +118,7 @@ struct log_softmax_bwd {
 };
 
 
-template<typename OP1, typename OP2, int Req, typename DType, int ndim>
+template<typename OP1, typename OP2, int Req, bool negate, typename DType, int ndim>
 inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
                         DType *igrad, Shape<ndim> shape, int axis,
                         const DType temperature) {
@@ -137,12 +144,16 @@ inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
 DType final_result;
 if (temperature == 1.0) {
   for (index_t j = 0; j < M; ++j) {
-final_result = OP2::Map(ograd[base + j*sa], out[base + j*sa], sum);
+final_result = negate ?
+   -OP2::Map(ograd[base + j*sa], out[base + j*sa], sum) :
+   OP2::Map(ograd[base + j*sa], out[base + j*sa], sum);
 KERNEL_ASSIGN(igrad[base + j*sa], Req, final_result);
   }
 } else {
   for (index_t j = 0; j < M; ++j) {
-final_result = OP2::Map(ograd[base + j*sa], out[base + j*sa], sum) / 
temperature;
+final_result = negate ?
+   -OP2::Map(ograd[base + j*

[GitHub] marcoabreu closed pull request #12379: Revert "Revert "Disable kvstore test (#11798)" (#12279)"

2018-08-29 Thread GitBox
marcoabreu closed pull request #12379: Revert "Revert "Disable kvstore test 
(#11798)" (#12279)"
URL: https://github.com/apache/incubator-mxnet/pull/12379
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/Jenkinsfile b/Jenkinsfile
index 6a93fd58641..346cb19ce46 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -900,6 +900,10 @@ core_logic: {
 }
   }
 },
+/*  Disabled due to master build failure:
+ *  
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/1221/pipeline/
+ *  https://github.com/apache/incubator-mxnet/issues/11801
+
 'dist-kvstore tests CPU': {
   node(NODE_LINUX_CPU) {
 ws('workspace/it-dist-kvstore') {
@@ -911,7 +915,7 @@ core_logic: {
   }
 }
   }
-},
+}, */
 'Scala: GPU': {
   node(NODE_LINUX_GPU) {
 ws('workspace/ut-scala-gpu') {


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: Revert "Revert "Disable kvstore test (#11798)" (#12279)" (#12379)

2018-08-29 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 846086d  Revert "Revert "Disable kvstore test (#11798)" (#12279)" 
(#12379)
846086d is described below

commit 846086d62805c67e00ac11e3818e4427debfd1e7
Author: Anton Chernov 
AuthorDate: Wed Aug 29 19:38:23 2018 +0200

Revert "Revert "Disable kvstore test (#11798)" (#12279)" (#12379)

This reverts commit c1a89488ef551f441dbdf1c5107694680ce1d340.
---
 Jenkinsfile | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 6a93fd5..346cb19 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -900,6 +900,10 @@ core_logic: {
 }
   }
 },
+/*  Disabled due to master build failure:
+ *  
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/1221/pipeline/
+ *  https://github.com/apache/incubator-mxnet/issues/11801
+
 'dist-kvstore tests CPU': {
   node(NODE_LINUX_CPU) {
 ws('workspace/it-dist-kvstore') {
@@ -911,7 +915,7 @@ core_logic: {
   }
 }
   }
-},
+}, */
 'Scala: GPU': {
   node(NODE_LINUX_GPU) {
 ws('workspace/ut-scala-gpu') {



[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213758020
 
 

 ##
 File path: tests/python/unittest/test_contrib_svrg_optimizer.py
 ##
 @@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+from mxnet.contrib.svrg_optimization.svrg_optimizer import SVRGOptimizer
+import mxnet as mx
+import numpy as np
+from mxnet.test_utils import same
+
+
+def create_network():
+    mx.random.seed(42)
+    train_data = np.random.randint(1, 5, [1000, 2])
+    weights = np.array([1.0, 2.0])
+    train_label = train_data.dot(weights)
+
+    batch_size = 32
+
+    di = mx.io.NDArrayIter(train_data, train_label, batch_size=batch_size, shuffle=True, label_name='lin_reg_label')
+    X = mx.sym.Variable('data')
+    Y = mx.symbol.Variable('lin_reg_label')
+    fully_connected_layer = mx.sym.FullyConnected(data=X, name='fc1', num_hidden=1)
+    lro = mx.sym.LinearRegressionOutput(data=fully_connected_layer, label=Y, name="lro")
+
+    mod = SVRGModule(
+        symbol=lro,
+        data_names=['data'],
+        label_names=['lin_reg_label'], update_freq=2
+    )
+
+    mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
+    mod.init_params(initializer=mx.init.Uniform(0.01), allow_missing=False,
+                    force_init=False, allow_extra=False)
+
+    return di, mod
+
+
+def test_init_svrg_optimizer():
+    di, mod = create_network()
 
 Review comment:
   'di' is unused. Can be changed to '_'


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213770436
 
 

 ##
 File path: tests/python/unittest/test_contrib_svrg_module.py
 ##
 @@ -0,0 +1,86 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+import mxnet as mx
+import numpy as np
+
+
+def set_up():
+    train_data = np.random.randint(1, 5, [1000, 2])
+    weights = np.array([1.0, 2.0])
+    train_label = train_data.dot(weights)
+
+    di = mx.io.NDArrayIter(train_data, train_label, batch_size=32, shuffle=True, label_name='lin_reg_label')
+    X = mx.sym.Variable('data')
+    Y = mx.symbol.Variable('lin_reg_label')
+    fully_connected_layer = mx.sym.FullyConnected(data=X, name='fc1', num_hidden=1)
+    lro = mx.sym.LinearRegressionOutput(data=fully_connected_layer, label=Y, name="lro")
+
+    mod = SVRGModule(
+        symbol=lro,
+        data_names=['data'],
+        label_names=['lin_reg_label'], update_freq=2)
+    mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
+    mod.init_params(initializer=mx.init.Uniform(0.01), allow_missing=False,
+                    force_init=False, allow_extra=False)
+
+    return mod
+
+
+def test_bind_module():
+    mod = set_up()
+    assert mod.binded == True
+    assert mod._mod_aux.binded == True
+
+
+def test_module_init():
+    mod = set_up()
+    assert mod._mod_aux != None
+
+
+def test_module_initializer():
+    def regression_model(m):
+        x = mx.symbol.var("data", stype='csr')
+        v = mx.symbol.var("v", shape=(m, 1), init=mx.init.Uniform(scale=.1),
+                          stype='row_sparse')
+        model = mx.symbol.dot(lhs=x, rhs=v)
+        y = mx.symbol.Variable("label")
+        model = mx.symbol.LinearRegressionOutput(data=model, label=y, name="out")
+        return model
+
+    n, m = 128, 100
 
 Review comment:
   Please add a comment explaining the values 128, 100


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213767482
 
 

 ##
 File path: example/svrg_module/train.py
 ##
 @@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import argparse
+import mxnet as mx
+from common import create_lin_reg_network, create_logger
+from data_reader import read_year_prediction_data
+
+parser = argparse.ArgumentParser()
+parser.add_argument('-e', dest='epochs', help='number of epochs for training phase', type=int, required=True)
+parser.add_argument('-f', dest="updateFreq", help="update frequency for SVRGModule", type=int, default=2, required=True)
+parser.add_argument('-b', dest="batch_size", help="define the batch size for training", type=int,
+                    default=100, required=False)
+parser.add_argument('-m', dest='metrics', help="create eval metric", type=str, required=False)
+parser.add_argument('--gpus', type=str, help='list of gpus to run, e.g. 0 or 0,2,5. empty means using cpu')
 
 Review comment:
   Default value?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213757231
 
 

 ##
 File path: tests/python/unittest/test_contrib_svrg_optimizer.py
 ##
 @@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+from mxnet.contrib.svrg_optimization.svrg_optimizer import SVRGOptimizer
+import mxnet as mx
+import numpy as np
+from mxnet.test_utils import same
+
+
+def create_network():
+    mx.random.seed(42)
 
 Review comment:
   Fixed seed required here?
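
   If a deterministic seed is wanted only for this test, one option is the `@with_seed()` decorator the unit-test suite already uses (a sketch, assuming the test sits in `tests/python/unittest/` next to `common.py`, which provides it):
   
   ```python
   # assumption: common.py from tests/python/unittest/ provides with_seed()
   from common import with_seed

   @with_seed(42)   # or @with_seed() for a random seed that still gets logged
   def test_init_svrg_optimizer():
       di, mod = create_network()
       # ... rest of the test unchanged
   ```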


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213765297
 
 

 ##
 File path: example/svrg_module/example_api_train.py
 ##
 @@ -0,0 +1,86 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import mxnet as mx
+import numpy as np
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+
+
+def test_svrg_intermediate_level_api(num_epoch):
+    """Test the intermediate-level SVRGModule API, where the training process
+    needs to be explicitly defined. KVStore is not explicitly created.
+    """
+    di, mod = create_network()
+    mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
+    mod.init_params(initializer=mx.init.Uniform(0.01), allow_missing=False, force_init=False, allow_extra=False)
+    kv = mx.kv.create("local")
+    mod.init_optimizer(kvstore=kv, optimizer='sgd', optimizer_params=(('learning_rate', 0.025),))
+    metrics = mx.metric.create("mse")
+    for e in range(num_epoch):
+        metrics.reset()
+        if e % (mod.update_freq) == 0:
+            mod.update_full_grads(di)
+        di.reset()
+        for batch in di:
+            mod.forward_backward(data_batch=batch)
+            mod.update()
+            mod.update_metric(metrics, batch.label)
+        mod.logger.info('Epoch[%d] Train cost=%f', e, metrics.get()[1])
+
+
+def test_svrg_high_level_api(num_epoch):
+    """Test the high-level SVRGModule API. KVStore is explicitly created.
+    """
+    di, mod = create_network()
+    mod.fit(di, eval_metric='mse', optimizer='sgd', optimizer_params=(('learning_rate', 0.025),),
+            num_epoch=num_epoch, kvstore='local')
+
+
+def create_network():
+    """Create a linear regression network for performing SVRG optimization.
+    :return: an instance of mx.io.NDArrayIter
+    :return: an instance of mx.mod.svrgmodule for performing SVRG optimization
+    """
+    import logging
+    head = '%(asctime)-15s %(message)s'
+    logging.basicConfig(level=logging.INFO, format=head)
+    train_data = np.random.randint(1, 5, [1000, 2])
+    weights = np.array([1.0, 2.0])
+    train_label = train_data.dot(weights)
+
+    di = mx.io.NDArrayIter(train_data, train_label, batch_size=32, shuffle=True, label_name='lin_reg_label')
+    X = mx.sym.Variable('data')
+    Y = mx.symbol.Variable('lin_reg_label')
+    fully_connected_layer = mx.sym.FullyConnected(data=X, name='fc1', num_hidden=1)
+    lro = mx.sym.LinearRegressionOutput(data=fully_connected_layer, label=Y, name="lro")
+
+    mod = SVRGModule(
+        symbol=lro,
+        data_names=['data'],
+        label_names=['lin_reg_label'], update_freq=2, logger=logging
+    )
+
+    return di, mod
+
+# run as a script
+if __name__ == "__main__":
+    num_epoch = 100
 
 Review comment:
   Can this be a user-defined param?
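
   A minimal sketch of what that could look like (the flag name and default are illustrative, not part of the PR):
   
   ```python
   import argparse

   parser = argparse.ArgumentParser()
   parser.add_argument('--num-epoch', dest='num_epoch', type=int, default=100,
                       help='number of epochs to run the SVRG examples for')
   args = parser.parse_args()

   test_svrg_intermediate_level_api(args.num_epoch)
   test_svrg_high_level_api(args.num_epoch)
   ```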


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213770806
 
 

 ##
 File path: tests/python/unittest/test_contrib_svrg_optimizer.py
 ##
 @@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+from mxnet.contrib.svrg_optimization.svrg_optimizer import SVRGOptimizer
+import mxnet as mx
+import numpy as np
+from mxnet.test_utils import same
+
+
+def create_network():
+    mx.random.seed(42)
+    train_data = np.random.randint(1, 5, [1000, 2])
+    weights = np.array([1.0, 2.0])
+    train_label = train_data.dot(weights)
+
+    batch_size = 32
 
 Review comment:
   Can this be user-defined?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213762082
 
 

 ##
 File path: example/svrg_module/common.py
 ##
 @@ -0,0 +1,78 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import mxnet as mx
+import logging
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+
+
+def create_lin_reg_network(train_features, train_labels, feature_dim, batch_size, update_freq, ctx, logger):
+    # fit a linear regression model with mxnet SVRG
+    print("Fitting linear regression with mxnet")
+    train_iter = mx.io.NDArrayIter(train_features, train_labels, batch_size=batch_size, shuffle=True,
+                                   data_name='data', label_name='label')
+    data = mx.sym.Variable("data")
+    label = mx.sym.Variable("label")
+    weight = mx.sym.Variable("fc_weight", shape=(1, feature_dim))
+    net = mx.sym.dot(data, weight.transpose())
+    bias = mx.sym.Variable("fc_bias", shape=(1,), wd_mult=0.0, lr_mult=10.0)
+    net = mx.sym.broadcast_plus(net, bias)
+    net = mx.sym.LinearRegressionOutput(data=net, label=label)
+
+    mod = SVRGModule(symbol=net, context=ctx, data_names=['data'], label_names=['label'], logger=logger,
+                     update_freq=update_freq)
+    return train_iter, mod
+
+
+def create_metrics(metrics):
+    metric = mx.metric.create(metrics)
+    return metric
+
+
+def create_logger():
+    logger = logging.getLogger('sgd_svrg')
+    logger.setLevel(logging.INFO)
+    formatter = logging.Formatter('%(asctime)s - %(message)s')
+    fh = logging.FileHandler('experiments_lr.log')
+    fh.setFormatter(formatter)
+    logger.addHandler(fh)
+    return logger
+
+
+def accumulate_grad(grad_dict, mod):
+    param_names = mod._exec_group.param_names
+    for i in range(len(param_names)):
+        if param_names[i] not in grad_dict:
+            grad_dict[param_names[i]] = mod._exec_group.grad_arrays[i][0].copy()
+        else:
+            grad_dict[param_names[i]] = mx.ndarray.concat(grad_dict[param_names[i]],
+                                                          mod._exec_group.grad_arrays[i][0], dim=0)
+
+
+def calc_expectation(grad_dict, count):
+    for key in grad_dict.keys():
+        grad_dict[str.format(key + "_expectation")] = mx.ndarray.sum(grad_dict[key], axis=0) / count
 
 Review comment:
   Is there a chance for Divide By Zero?
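
   One way to make the division total, assuming `count` is the number of accumulated batches (a sketch, not the PR's code; it also snapshots the keys, since the loop adds `*_expectation` entries to the dict it iterates over):
   
   ```python
   import mxnet as mx

   def calc_expectation(grad_dict, count):
       # a zero count means no gradients were accumulated yet
       if count <= 0:
           raise ValueError("count must be positive, got %r" % (count,))
       for key in list(grad_dict.keys()):  # snapshot: new keys are added below
           grad_dict[key + "_expectation"] = mx.ndarray.sum(grad_dict[key], axis=0) / count
       return grad_dict
   ```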


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213767399
 
 

 ##
 File path: example/svrg_module/train.py
 ##
 @@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import argparse
+import mxnet as mx
+from common import create_lin_reg_network, create_logger
+from data_reader import read_year_prediction_data
+
+parser = argparse.ArgumentParser()
+parser.add_argument('-e', dest='epochs', help='number of epochs for training phase', type=int, required=True)
+parser.add_argument('-f', dest="updateFreq", help="update frequency for SVRGModule", type=int, default=2, required=True)
+parser.add_argument('-b', dest="batch_size", help="define the batch size for training", type=int,
+                    default=100, required=False)
+parser.add_argument('-m', dest='metrics', help="create eval metric", type=str, required=False)
 
 Review comment:
   Any default value?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213756853
 
 

 ##
 File path: python/mxnet/contrib/svrg_optimization/svrg_optimizer.py
 ##
 @@ -0,0 +1,133 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""A `SVRGOptimizer` encapsulates two optimizers to accommodate SVRG 
optimization logic.
+"""
+
+
+import mxnet as mx
+
+
+@mx.optimizer.register
+class AssignmentOptimizer(mx.optimizer.Optimizer):
+    """AssignmentOptimizer assigns gradients to be weights for SVRGModule full
+    gradients accumulation in the KVStore
+    """
+    def update(self, index, weight, grad, state):
+        weight[:] = grad
+
+
+@mx.optimizer.register
+class SVRGOptimizer(mx.optimizer.Optimizer):
+    """SVRGOptimizer is a wrapper class for two optimizers: one for accumulating
+    full gradients and the other one is the passed-in optimizer.
+
+    Parameters
+    ----------
+    default_optimizer: optimizer passed in when invoking mx.mod.init_optimizer
+    """
+
+    def __init__(self, default_optimizer, **kwargs):
+        # Reconstruct kwargs to identify additional params for the default optimizer
+        default_param = self._check_params(**kwargs)
+        super(SVRGOptimizer, self).__init__(**default_param)
+        if isinstance(default_optimizer, str):
+            self.default_opt = mx.optimizer.create(default_optimizer, **kwargs)
+        else:
+            self.default_opt = default_optimizer
+        self.aux_opt = mx.optimizer.create(AssignmentOptimizer.__name__)
+
+
+    def _check_params(self, **kwargs):
+        optimizer_param = dict(kwargs)
+        base_params = ['rescale_grad', 'param_idx2name', 'wd', 'clip_gradient',
+                       'learning_rate', 'lr_scheduler', 'sym', 'begin_num_update',
+                       'multi_precision', 'param_dict']
+
+        default_params = {}
+        for key, _ in optimizer_param.items():
+            if key in base_params:
+                default_params[key] = optimizer_param[key]
+
+        return default_params
+
+    def update(self, index, weight, grad, state):
+        """Updates the given parameter using the corresponding gradient and state.
+        If the key contains 'full', update with lr = -1; otherwise use the default
+        optimizer.
+
+        Parameters
+        ----------
+        index : int
+            The unique index of the parameter into the individual learning
+            rates and weight decays. Learning rates and weight decay
+            may be set via `set_lr_mult()` and `set_wd_mult()`, respectively.
+        weight : NDArray
+            The parameter to be updated.
+        grad : NDArray
+            The gradient of the objective with respect to this parameter.
+        state : any obj
+            The state returned by `create_state()`.
+        """
+
+        name = self._check_index(index)
+
+        if "full".lower() in name:
+            self.aux_opt.update(index, weight, grad, state)
+        else:
+            # use the default optimizer
+            self.default_opt.update(index, weight, grad, state)
+
+    def create_state(self, index, weight):
+        """Creates auxiliary state for a given weight.
+        Some optimizers require additional states, e.g. momentum, in addition
+        to gradients in order to update weights. This function creates state
+        for a given weight which will be used in `update`. This function is
+        called only once for each weight.
+
+        Parameters
+        ----------
+        index : int
+            An unique index to identify the weight.
+        weight : NDArray
+            The weight.
+        Returns
+        -------
+        state : any obj
+            The state associated with the weight.
+        """
+
+        name = self._check_index(index)
+        if "full".lower() in name:
 
 Review comment:
   Here too


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213763618
 
 

 ##
 File path: example/svrg_module/data_reader.py
 ##
 @@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import numpy as np
+
+
+def read_year_prediction_data(fileName):
+    # Download data file
+    # from subprocess import call
+    # call(['wget', 'https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/YearPredictionMSD.bz2'])
+    # call(['bzip2', '-d', 'YearPredictionMSD.bz2'])
+
+    from sklearn.datasets import load_svmlight_file
+
+    feature_dim = 90
+    print("Reading data from disk...")
+    train_features, train_labels = load_svmlight_file(fileName, n_features=feature_dim, dtype=np.float32)
+    train_features = train_features.todense()
+
+    # normalize the data: subtract means and divide by standard deviations
+    label_mean = train_labels.mean()
+    label_std = np.sqrt(np.square(train_labels - label_mean).mean())
+    feature_means = train_features.mean(axis=0)
+    feature_stds = np.sqrt(np.square(train_features - feature_means).mean(axis=0))
+
+    train_features = (train_features - feature_means) / feature_stds
+    train_labels = (train_labels - label_mean) / label_std
 
 Review comment:
   Any chance of Divide By Zero here?
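
   A guarded variant of the standardization, assuming constant columns should pass through unscaled (a sketch, not the PR's code):
   
   ```python
   import numpy as np

   def safe_standardize(x, axis=0):
       mean = x.mean(axis=axis)
       std = np.sqrt(np.square(x - mean).mean(axis=axis))
       # a constant column has std == 0; map it to 1 so the divide is a no-op
       std = np.where(std > 0, std, 1.0)
       return (x - mean) / std
   ```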


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213767214
 
 

 ##
 File path: example/svrg_module/train.py
 ##
 @@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import argparse
+import mxnet as mx
+from common import create_lin_reg_network, create_logger
+from data_reader import read_year_prediction_data
+
+parser = argparse.ArgumentParser()
+parser.add_argument('-e', dest='epochs', help='number of epochs for training phase', type=int, required=True)
 
 Review comment:
   What is the default value?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213761306
 
 

 ##
 File path: tests/python/unittest/test_contrib_svrg_optimizer.py
 ##
 @@ -0,0 +1,101 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+from mxnet.contrib.svrg_optimization.svrg_optimizer import SVRGOptimizer
+import mxnet as mx
+import numpy as np
+from mxnet.test_utils import same
+
+
+def create_network():
+    mx.random.seed(42)
+    train_data = np.random.randint(1, 5, [1000, 2])
+    weights = np.array([1.0, 2.0])
+    train_label = train_data.dot(weights)
+
+    batch_size = 32
+
+    di = mx.io.NDArrayIter(train_data, train_label, batch_size=batch_size, shuffle=True, label_name='lin_reg_label')
+    X = mx.sym.Variable('data')
+    Y = mx.symbol.Variable('lin_reg_label')
+    fully_connected_layer = mx.sym.FullyConnected(data=X, name='fc1', num_hidden=1)
+    lro = mx.sym.LinearRegressionOutput(data=fully_connected_layer, label=Y, name="lro")
+
+    mod = SVRGModule(
+        symbol=lro,
+        data_names=['data'],
+        label_names=['lin_reg_label'], update_freq=2
+    )
+
+    mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
+    mod.init_params(initializer=mx.init.Uniform(0.01), allow_missing=False,
+                    force_init=False, allow_extra=False)
+
+    return di, mod
+
+
+def test_init_svrg_optimizer():
+    di, mod = create_network()
+
+    kv = mx.kv.create('local')
+    mod.init_optimizer(kvstore=kv, optimizer='sgd', optimizer_params=(('learning_rate', 0.01),),
+                       force_init=False)
+
+    assert type(mod._optimizer).__name__ == SVRGOptimizer.__name__
+
+
+def test_svrg_optimizer_constructor():
+    _, mod = create_network()
 
 Review comment:
   Both `_` and `mod` are unused here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213766331
 
 

 ##
 File path: example/svrg_module/example_inference.py
 ##
 @@ -0,0 +1,89 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import mxnet as mx
+import numpy as np
+import logging
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+
+
+def test_svrg_inference(num_epoch):
+    train_iter, val_iter, mod = create_network()
+    mod.fit(train_iter, eval_data=val_iter, eval_metric='mse', optimizer='sgd',
+            optimizer_params=(('learning_rate', 0.025),),
+            num_epoch=num_epoch)
+
+def test_score(num_epoch):
+    train_iter, val_iter, mod = create_network()
+    mod.bind(data_shapes=train_iter.provide_data, label_shapes=train_iter.provide_label)
+    mod.init_params(initializer=mx.init.Uniform(0.01), allow_missing=False, force_init=False, allow_extra=False)
+    mod.init_optimizer(kvstore='local', optimizer='nag', optimizer_params=(('momentum', 0.9),))
+    metrics = mx.metric.create("mse")
+    for e in range(num_epoch):
+        metrics.reset()
+        if e % (mod.update_freq + 1) == 0:
+            mod.update_full_grads(train_iter)
+        train_iter.reset()
+        for batch in train_iter:
+            mod.forward_backward(data_batch=batch)
+            mod.update()
+            mod.update_metric(metrics, batch.label)
+
+    y = mod.predict(val_iter)
+    assert y.shape == (200, 1)
 
 Review comment:
   Add a comment about why the comparison is with (200,1)
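
   For the record, the 200 appears to come from `create_network` further down in this file: 1000 synthetic rows with an 80/20 split leave 200 validation samples, and the single-output FC layer (`num_hidden=1`) makes each prediction one column wide, hence `(200, 1)`.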


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213766632
 
 

 ##
 File path: example/svrg_module/example_inference.py
 ##
 @@ -0,0 +1,89 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import mxnet as mx
+import numpy as np
+import logging
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+
+
+def test_svrg_inference(num_epoch):
+    train_iter, val_iter, mod = create_network()
+    mod.fit(train_iter, eval_data=val_iter, eval_metric='mse', optimizer='sgd',
+            optimizer_params=(('learning_rate', 0.025),),
+            num_epoch=num_epoch)
+
+def test_score(num_epoch):
+    train_iter, val_iter, mod = create_network()
+    mod.bind(data_shapes=train_iter.provide_data, label_shapes=train_iter.provide_label)
+    mod.init_params(initializer=mx.init.Uniform(0.01), allow_missing=False, force_init=False, allow_extra=False)
+    mod.init_optimizer(kvstore='local', optimizer='nag', optimizer_params=(('momentum', 0.9),))
+    metrics = mx.metric.create("mse")
+    for e in range(num_epoch):
+        metrics.reset()
+        if e % (mod.update_freq + 1) == 0:
+            mod.update_full_grads(train_iter)
+        train_iter.reset()
+        for batch in train_iter:
+            mod.forward_backward(data_batch=batch)
+            mod.update()
+            mod.update_metric(metrics, batch.label)
+
+    y = mod.predict(val_iter)
+    assert y.shape == (200, 1)
+    score = mod.score(val_iter, ['mse'])
+    print("Training Loss is %f" % score[0][1])
+
+
+def create_network():
+    """Create a linear regression network for performing SVRG optimization.
+    :return: an instance of mx.io.NDArrayIter
+    :return: an instance of mx.mod.svrgmodule for performing SVRG optimization
+    """
+    head = '%(asctime)-15s %(message)s'
+    logging.basicConfig(level=logging.INFO, format=head)
+    data = np.random.randint(1, 5, [1000, 2])
+    n_train = int(data.shape[0] * 0.8)
+    weights = np.array([1.0, 2.0])
+    label = data.dot(weights)
+
+    di = mx.io.NDArrayIter(data[:n_train, :], label[:n_train], batch_size=32, shuffle=True, label_name='lin_reg_label')
+    val_iter = mx.io.NDArrayIter(data[n_train:, :], label[n_train:], batch_size=32)
+
+    X = mx.sym.Variable('data')
+    Y = mx.symbol.Variable('lin_reg_label')
+    fully_connected_layer = mx.sym.FullyConnected(data=X, name='fc1', num_hidden=1)
+    lro = mx.sym.LinearRegressionOutput(data=fully_connected_layer, label=Y, name="lro")
+
+    mod = SVRGModule(
+        symbol=lro,
+        data_names=['data'],
+        label_names=['lin_reg_label'], update_freq=2
+    )
+
+    return di, val_iter, mod
+
+
+# run as a script
+if __name__ == "__main__":
+num_epoch = 100
 
 Review comment:
   user-defined param possible?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213762445
 
 

 ##
 File path: example/svrg_module/common.py
 ##
 @@ -0,0 +1,78 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import mxnet as mx
+import logging
+from mxnet.contrib.svrg_optimization.svrg_module import SVRGModule
+
+
+def create_lin_reg_network(train_features, train_labels, feature_dim, batch_size, update_freq, ctx, logger):
+    # fit a linear regression model with mxnet SVRG
+    print("Fitting linear regression with mxnet")
+    train_iter = mx.io.NDArrayIter(train_features, train_labels, batch_size=batch_size, shuffle=True,
+                                   data_name='data', label_name='label')
+    data = mx.sym.Variable("data")
+    label = mx.sym.Variable("label")
+    weight = mx.sym.Variable("fc_weight", shape=(1, feature_dim))
+    net = mx.sym.dot(data, weight.transpose())
+    bias = mx.sym.Variable("fc_bias", shape=(1,), wd_mult=0.0, lr_mult=10.0)
+    net = mx.sym.broadcast_plus(net, bias)
+    net = mx.sym.LinearRegressionOutput(data=net, label=label)
+
+    mod = SVRGModule(symbol=net, context=ctx, data_names=['data'], label_names=['label'], logger=logger,
+                     update_freq=update_freq)
+    return train_iter, mod
+
+
+def create_metrics(metrics):
+    metric = mx.metric.create(metrics)
+    return metric
+
+
+def create_logger():
+    logger = logging.getLogger('sgd_svrg')
+    logger.setLevel(logging.INFO)
+    formatter = logging.Formatter('%(asctime)s - %(message)s')
+    fh = logging.FileHandler('experiments_lr.log')
+    fh.setFormatter(formatter)
+    logger.addHandler(fh)
+    return logger
+
+
+def accumulate_grad(grad_dict, mod):
+    param_names = mod._exec_group.param_names
+    for i in range(len(param_names)):
+        if param_names[i] not in grad_dict:
+            grad_dict[param_names[i]] = mod._exec_group.grad_arrays[i][0].copy()
+        else:
+            grad_dict[param_names[i]] = mx.ndarray.concat(grad_dict[param_names[i]],
+                                                          mod._exec_group.grad_arrays[i][0], dim=0)
+
+
+def calc_expectation(grad_dict, count):
+    for key in grad_dict.keys():
+        grad_dict[str.format(key + "_expectation")] = mx.ndarray.sum(grad_dict[key], axis=0) / count
+
+    return grad_dict
+
+
+def calc_variance(grad_dict, count, param_names):
+    for i in range(len(param_names)):
+        diff_sqr = mx.ndarray.square(mx.nd.subtract(grad_dict[param_names[i]],
+                                                    grad_dict[str.format(param_names[i] + "_expectation")]))
+        grad_dict[str.format(param_names[i] + "_variance")] = mx.ndarray.sum(diff_sqr, axis=0) / count
 
 Review comment:
   Divide By Zero here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213763376
 
 

 ##
 File path: example/svrg_module/data_reader.py
 ##
 @@ -0,0 +1,44 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import numpy as np
+
+
+def read_year_prediction_data(fileName):
+    # Download data file
+    # from subprocess import call
+    # call(['wget', 'https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/YearPredictionMSD.bz2'])
+    # call(['bzip2', '-d', 'YearPredictionMSD.bz2'])
+
+    from sklearn.datasets import load_svmlight_file
+
+    feature_dim = 90
 
 Review comment:
   Please add a comment referring to the source for this number 90.
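   For example (the UCI / LIBSVM description of YearPredictionMSD lists 90 attributes: 12 timbre averages plus 78 timbre covariances):
   
   ```python
   # YearPredictionMSD has 90 audio features per song:
   # 12 timbre averages + 78 timbre covariances, per
   # https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html
   feature_dim = 90
   ```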


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-29 Thread GitBox
vandanavk commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213756786
 
 

 ##
 File path: python/mxnet/contrib/svrg_optimization/svrg_optimizer.py
 ##
 @@ -0,0 +1,133 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""A `SVRGOptimizer` encapsulates two optimizers to accommodate SVRG 
optimization logic.
+"""
+
+
+import mxnet as mx
+
+
+@mx.optimizer.register
+class AssignmentOptimizer(mx.optimizer.Optimizer):
+    """AssignmentOptimizer assigns gradients to weights for SVRGModule's full-gradient
+    accumulation in the KVStore.
+    """
+    def update(self, index, weight, grad, state):
+        weight[:] = grad
+
+
+@mx.optimizer.register
+class SVRGOptimizer(mx.optimizer.Optimizer):
+    """SVRGOptimizer is a wrapper class for two optimizers: one that accumulates the full
+    gradients and the passed-in optimizer.
+
+    Parameters
+    ----------
+    default_optimizer: the optimizer passed in when invoking mx.mod.init_optimizer
+    """
+
+    def __init__(self, default_optimizer, **kwargs):
+        # Reconstruct kwargs to identify additional params for the default optimizer
+        default_param = self._check_params(**kwargs)
+        super(SVRGOptimizer, self).__init__(**default_param)
+        if isinstance(default_optimizer, str):
+            self.default_opt = mx.optimizer.create(default_optimizer, **kwargs)
+        else:
+            self.default_opt = default_optimizer
+        self.aux_opt = mx.optimizer.create(AssignmentOptimizer.__name__)
+
+
+    def _check_params(self, **kwargs):
+        optimizer_param = dict(kwargs)
+        base_params = ['rescale_grad', 'param_idx2name', 'wd', 'clip_gradient', 'learning_rate', 'lr_scheduler', 'sym',
+                       'begin_num_update', 'multi_precision', 'param_dict']
+
+        default_params = {}
+        for key, _ in optimizer_param.items():
+            if key in base_params:
+                default_params[key] = optimizer_param[key]
+
+        return default_params
+
+    def update(self, index, weight, grad, state):
+        """Updates the given parameter using the corresponding gradient and state. If the key
+        contains 'full', updates with lr = -1; otherwise uses the default optimizer.
+
+        Parameters
+        ----------
+        index : int
+            The unique index of the parameter into the individual learning
+            rates and weight decays. Learning rates and weight decay
+            may be set via `set_lr_mult()` and `set_wd_mult()`, respectively.
+        weight : NDArray
+            The parameter to be updated.
+        grad : NDArray
+            The gradient of the objective with respect to this parameter.
+        state : any obj
+            The state returned by `create_state()`.
+        """
+
+        name = self._check_index(index)
+
+        if "full".lower() in name:
 
 Review comment:
   is .lower() required here?
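   `"full"` is a lowercase literal, so calling `.lower()` on it is a no-op. A sketch of the two plausible intents (assuming one of them was meant):
   
   ```python
   # the literal is already lowercase, so the .lower() can simply be dropped:
   if "full" in name:
       ...
   # or, if the match on `name` was meant to be case-insensitive:
   if "full" in name.lower():
       ...
   ```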


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] mseth10 commented on issue #12219: flaky test: test_operator_gpu.test_convolution_grouping

2018-08-29 Thread GitBox
mseth10 commented on issue #12219: flaky test: 
test_operator_gpu.test_convolution_grouping
URL: 
https://github.com/apache/incubator-mxnet/issues/12219#issuecomment-417042576
 
 
   Fix in #12385 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
aaronmarkham commented on a change in pull request #12388: Installation 
instructions consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213788955
 
 

 ##
 File path: docs/install/build_from_source.md
 ##
 @@ -226,109 +148,62 @@ To build OpenCV from source code, you need the 
[cmake](https://cmake.org) librar
sudo make install
```
 
-4. Add the lib path to your configuration such as `~/.bashrc`.
+* Add the lib path to your configuration such as `~/.bashrc`.
 
```bash
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig/
```
 
-
-
-
-
-
-First download and install [OpenCV](http://opencv.org/releases.html), then set
-the environment variable `OpenCV_DIR` to point to the OpenCV build directory.
-
-
-
- Optional: 
[CUDA](https://developer.nvidia.com/cuda-downloads)/[cuDNN](https://developer.nvidia.com/cudnn)
 for Nvidia GPUs
-
-MXNet is compatible with both CUDA 7.5 and 8.0. It is recommended to use cuDNN 5.
-
-
-
-
-Install CUDA 7.5 and cuDNN 5 on Ubuntu 14.04
-
-```bash
-wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.5-18_amd64.deb
-sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
-echo "deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1404/x86_64 /" | sudo tee /etc/apt/sources.list.d/nvidia-ml.list
-sudo apt-get update
-sudo apt-get install -y linux-image-extra-`uname -r` linux-headers-`uname -r` linux-image-`uname -r`
-sudo apt-get install -y cuda libcudnn5-dev=5.0.5-1+cuda7.5
-```
-
-
-
 
 ### Build
 
-
-
-First clone the recent codes
-
+1. Clone the MXNet project.
 ```bash
-git clone --recursive https://github.com/dmlc/mxnet
+git clone --recursive https://github.com/apache/incubator-mxnet mxnet
 cd mxnet
 ```
 
-File
-[`make/config.mk`](https://github.com/dmlc/mxnet/blob/master/make/config.mk)
-contains all the compilation options. You can edit it and then `make`. There are
-some example build options
-
-If you want to build MXNet with C++ language binding, please make sure you read [Build the C++ package](#build-the-c-package) first.
-
-
+There is a configuration file for make,
+[`make/config.mk`](https://github.com/apache/incubator-mxnet/blob/master/make/config.mk), that contains all the compilation options. You can edit it and then run `make`.
 
-
+To enable C++ package, just add `USE_CPP_PACKAGE=1` when you run `make`.
 
-- Build without using OpenCV. `-j` runs multiple jobs against multi-core CPUs.
+Other typical configurations are:
 
-  ```bash
-  make -j USE_OPENCV=0
-  ```
+* `-j` runs multiple jobs against multi-core CPUs. Example using all cores on Linux:
 
-- Build with both GPU and OpenCV support
-
-  ```bash
-  make -j USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1
-  ```
-
-
-
-
-
-- Build with the default BLAS library and clang installed with `xcode` (OPENMP
-  is disabled because it is not supported in default by clang).
+```bash
+make -j$(nproc)
+```
 
-  ```bash
-  make -j USE_BLAS=apple USE_OPENCV=0 USE_OPENMP=0
-  ```
+* Build without using OpenCV:
 
-
+```bash
+make USE_OPENCV=0
 
 Review comment:
   This caused me to rearrange the page for a better flow. This is all good 
info. It should be double-checked in case it is only for cmake and won't work 
for make.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Roshrini commented on issue #12018: onnx converter error

2018-08-29 Thread GitBox
Roshrini commented on issue #12018: onnx converter error
URL: 
https://github.com/apache/incubator-mxnet/issues/12018#issuecomment-417059622
 
 
   This issue with the BatchNorm operator is fixed in the latest code; the user is running old code. I ran both opset 6 and opset 7 of the shufflenet model and it works fine, so I cannot reproduce the issue. Added a comment on the forum: https://discuss.mxnet.io/t/error-in-loading-pretrained-shufflenet-onnx-model/1520/3?u=roshrini
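   
   For reference, a minimal repro sketch against the current code (assuming a local shufflenet.onnx file downloaded from the ONNX model zoo):
   
   ```python
   import mxnet.contrib.onnx as onnx_mxnet
   
   # import_model returns the symbol plus the arg/aux parameter dicts for the ONNX graph
   sym, arg_params, aux_params = onnx_mxnet.import_model('shufflenet.onnx')
   ```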


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
aaronmarkham commented on a change in pull request #12388: Installation 
instructions consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#discussion_r213789828
 
 

 ##
 File path: docs/install/index.md
 ##
 @@ -63,314 +63,196 @@ Indicate your preferred configuration. Then, follow the 
customized commands to i
 
 
   Pip
-  Virtualenv
   Docker
   Build from 
Source
 
 
 
 
-
+
 
 
 
 
 
 
 
-
-The following installation instructions have been tested on Ubuntu 14.04 and 
16.04.
-
-
 
-
-
-**Step 1**  Install prerequisites - wget and latest pip.
-
-Installing *MXNet* with pip requires a latest version of `pip`. Install the 
latest version of `pip` by issuing the following command in the terminal.
-
-```bash
-$ sudo apt-get update
-$ sudo apt-get install -y wget python gcc
-$ wget https://bootstrap.pypa.io/get-pip.py && sudo python get-pip.py
-```
-
 
 
-**Step 2** Install MXNet with OpenBLAS acceleration.
-
-```bash
-$ pip install mxnet
-```
-
-**Step 3**  Install [Graphviz](http://www.graphviz.org/). (Optional, needed 
for graph visualization using `mxnet.viz` package).
-```bash
-sudo apt-get install graphviz
-pip install graphviz
 ```
-
-**Step 4**  Validate the installation by running simple MXNet code described 
[here](#validate-mxnet-installation).
-
-**Experimental Choice** If You would like to install mxnet with Intel MKL, try 
the experimental pip package with MKL:
-```bash
-$ pip install mxnet-mkl
+$ pip install mxnet
 
 Review comment:
   
![2018-08-29_11-35-04](https://user-images.githubusercontent.com/5974205/44807904-998ab200-ab7f-11e8-8b90-66426b2eeec6.png)
   
   I added the link to PyPI as a "footer" for every pip install instruction, so users know where to look for the available packages. I could also put in a footer note that appending *-mkl to a package name gets them the experimental MKL support. Would that be good?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #11148: [MXNET-679] Refactor handling BLAS libraries with cmake

2018-08-29 Thread GitBox
aaronmarkham commented on a change in pull request #11148: [MXNET-679] Refactor 
handling BLAS libraries with cmake
URL: https://github.com/apache/incubator-mxnet/pull/11148#discussion_r213774786
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -39,7 +49,123 @@ mxnet_option(INSTALL_EXAMPLES "Install the example 
source files." OFF)
 mxnet_option(USE_SIGNAL_HANDLER   "Print stack traces on segfaults." OFF)
 mxnet_option(USE_TENSORRT "Enable infeference optimization with 
TensorRT." OFF)
 
-message(STATUS "CMAKE_SYSTEM_NAME ${CMAKE_SYSTEM_NAME}")
+if(NOT mxnet_LINKER_LIBS)
+  set(mxnet_LINKER_LIBS "")
+endif(NOT mxnet_LINKER_LIBS)
+
+if(MSVC)
+  set(SYSTEM_ARCHITECTURE x86_64)
+else()
+  execute_process(COMMAND uname -m COMMAND tr -d '\n' OUTPUT_VARIABLE 
SYSTEM_ARCHITECTURE)
+endif()
+
+set(CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules;${CMAKE_MODULE_PATH}")
+
+SET(EXTRA_OPERATORS "" CACHE PATH "EXTRA OPERATORS PATH")
+
+if("$ENV{VERBOSE}" STREQUAL "1")
+  message(STATUS " Verbose Makefile ACTIVATED")
+  set(CMAKE_VERBOSE_MAKEFILE ON)
+endif()
+
+# ---[ BLAS
+
+# Choose BLAS (Basic Linear Algebra Subprograms) computation libraries
+
+# MXNet supports multiple mathematical backends for computations on the CPU:
+#
+# * Atlas
+# * OpenBLAS
+# * MKL (MKL, MKLML)
+# * MKLDNN
+# * Apple Accelerate
+#
+# The default order of choice for the libraries if found follows the path from 
the most
+# (recommended) to less performant backends. The order is as follows:
+#
+# For desktop platforms (x86_64):
+#
+# 1. MKLDNN (submodule) | USE_MKLDNN
+# 2. MKL | USE_MKL_IF_AVAILABLE
+# 3. MKLML (downloaded) | USE_MKLML
+# 4. Apple Accelerate | USE_APPLE_ACCELERATE_IF_AVAILABLE | Mac only
+# 5. OpenBLAS | BLAS | Options: Atlas, Open, MKL, Apple
+#
+# Note: If USE_MKL_IF_AVAILABLE is set to False then MKLML and MKLDNN will be 
disabled as well for configuration
+# backwards compatibility.
+#
+# For embedded platforms (all other and if cross compiled):
+#
+# 1. OpenBLAS | BLAS | Options: Atlas, Open
 
 Review comment:
   wouldn't this include MKL and Apple in the options?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #11148: [MXNET-679] Refactor handling BLAS libraries with cmake

2018-08-29 Thread GitBox
aaronmarkham commented on a change in pull request #11148: [MXNET-679] Refactor 
handling BLAS libraries with cmake
URL: https://github.com/apache/incubator-mxnet/pull/11148#discussion_r213775462
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -39,7 +49,123 @@ mxnet_option(INSTALL_EXAMPLES "Install the example 
source files." OFF)
 mxnet_option(USE_SIGNAL_HANDLER   "Print stack traces on segfaults." OFF)
 mxnet_option(USE_TENSORRT "Enable infeference optimization with 
TensorRT." OFF)
 
-message(STATUS "CMAKE_SYSTEM_NAME ${CMAKE_SYSTEM_NAME}")
+if(NOT mxnet_LINKER_LIBS)
+  set(mxnet_LINKER_LIBS "")
+endif(NOT mxnet_LINKER_LIBS)
+
+if(MSVC)
+  set(SYSTEM_ARCHITECTURE x86_64)
+else()
+  execute_process(COMMAND uname -m COMMAND tr -d '\n' OUTPUT_VARIABLE 
SYSTEM_ARCHITECTURE)
+endif()
+
+set(CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules;${CMAKE_MODULE_PATH}")
+
+SET(EXTRA_OPERATORS "" CACHE PATH "EXTRA OPERATORS PATH")
+
+if("$ENV{VERBOSE}" STREQUAL "1")
+  message(STATUS " Verbose Makefile ACTIVATED")
+  set(CMAKE_VERBOSE_MAKEFILE ON)
+endif()
+
+# ---[ BLAS
+
+# Choose BLAS (Basic Linear Algebra Subprograms) computation libraries
+
+# MXNet supports multiple mathematical backends for computations on the CPU:
+#
+# * Atlas
+# * OpenBLAS
+# * MKL (MKL, MKLML)
+# * MKLDNN
+# * Apple Accelerate
+#
+# The default order of choice for the libraries if found follows the path from 
the most
+# (recommended) to less performant backends. The order is as follows:
+#
+# For desktop platforms (x86_64):
+#
+# 1. MKLDNN (submodule) | USE_MKLDNN
+# 2. MKL | USE_MKL_IF_AVAILABLE
+# 3. MKLML (downloaded) | USE_MKLML
+# 4. Apple Accelerate | USE_APPLE_ACCELERATE_IF_AVAILABLE | Mac only
+# 5. OpenBLAS | BLAS | Options: Atlas, Open, MKL, Apple
+#
+# Note: If USE_MKL_IF_AVAILABLE is set to False then MKLML and MKLDNN will be 
disabled as well for configuration
+# backwards compatibility.
+#
+# For embedded platforms (all other and if cross compiled):
+#
+# 1. OpenBLAS | BLAS | Options: Atlas, Open
+#
+# You can set the BLAS library explicitly by setting the BLAS variable to:
+#
+# * Atlas
+# * Open
+# * MKL
+# * Apple
 
 Review comment:
   so there's Apple for embedded? How does this relate to the earlier-mentioned BLAS=Apple / USE_APPLE_ACCELERATE_IF_AVAILABLE?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #11148: [MXNET-679] Refactor handling BLAS libraries with cmake

2018-08-29 Thread GitBox
aaronmarkham commented on a change in pull request #11148: [MXNET-679] Refactor 
handling BLAS libraries with cmake
URL: https://github.com/apache/incubator-mxnet/pull/11148#discussion_r213776609
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -39,7 +49,123 @@ mxnet_option(INSTALL_EXAMPLES "Install the example 
source files." OFF)
 mxnet_option(USE_SIGNAL_HANDLER   "Print stack traces on segfaults." OFF)
 mxnet_option(USE_TENSORRT "Enable infeference optimization with 
TensorRT." OFF)
 
-message(STATUS "CMAKE_SYSTEM_NAME ${CMAKE_SYSTEM_NAME}")
+if(NOT mxnet_LINKER_LIBS)
+  set(mxnet_LINKER_LIBS "")
+endif(NOT mxnet_LINKER_LIBS)
+
+if(MSVC)
+  set(SYSTEM_ARCHITECTURE x86_64)
+else()
+  execute_process(COMMAND uname -m COMMAND tr -d '\n' OUTPUT_VARIABLE 
SYSTEM_ARCHITECTURE)
+endif()
+
+set(CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules;${CMAKE_MODULE_PATH}")
+
+SET(EXTRA_OPERATORS "" CACHE PATH "EXTRA OPERATORS PATH")
+
+if("$ENV{VERBOSE}" STREQUAL "1")
+  message(STATUS " Verbose Makefile ACTIVATED")
+  set(CMAKE_VERBOSE_MAKEFILE ON)
+endif()
+
+# ---[ BLAS
+
+# Choose BLAS (Basic Linear Algebra Subprograms) computation libraries
+
+# MXNet supports multiple mathematical backends for computations on the CPU:
+#
+# * Atlas
+# * OpenBLAS
+# * MKL (MKL, MKLML)
+# * MKLDNN
+# * Apple Accelerate
+#
+# The default order of choice for the libraries if found follows the path from 
the most
+# (recommended) to less performant backends. The order is as follows:
+#
+# For desktop platforms (x86_64):
+#
+# 1. MKLDNN (submodule) | USE_MKLDNN
+# 2. MKL | USE_MKL_IF_AVAILABLE
+# 3. MKLML (downloaded) | USE_MKLML
+# 4. Apple Accelerate | USE_APPLE_ACCELERATE_IF_AVAILABLE | Mac only
+# 5. OpenBLAS | BLAS | Options: Atlas, Open, MKL, Apple
+#
+# Note: If USE_MKL_IF_AVAILABLE is set to False then MKLML and MKLDNN will be 
disabled as well for configuration
+# backwards compatibility.
+#
+# For embedded platforms (all other and if cross compiled):
+#
+# 1. OpenBLAS | BLAS | Options: Atlas, Open
+#
+# You can set the BLAS library explicitly by setting the BLAS variable to:
+#
+# * Atlas
+# * Open
+# * MKL
+# * Apple
+#
+# See cmake/ChooseBLAS.cmake file for the options.
+#
+# Intel's MKL (Math Kernel Library) is one of the most powerful math libraries
+# https://software.intel.com/en-us/mkl
+#
+# It has following flavours:
+#
+# * MKL is a full library, containing all the functionality. It is free under
 
 Review comment:
   all the functionality?
   This is too broad. Compared to what?
   How about:
   MKL is a complete math library, containing all the functionality found in ATLAS, OpenBLAS and LAPACK


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on a change in pull request #11148: [MXNET-679] Refactor handling BLAS libraries with cmake

2018-08-29 Thread GitBox
aaronmarkham commented on a change in pull request #11148: [MXNET-679] Refactor 
handling BLAS libraries with cmake
URL: https://github.com/apache/incubator-mxnet/pull/11148#discussion_r213777091
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -39,7 +49,123 @@ mxnet_option(INSTALL_EXAMPLES "Install the example 
source files." OFF)
 mxnet_option(USE_SIGNAL_HANDLER   "Print stack traces on segfaults." OFF)
 mxnet_option(USE_TENSORRT "Enable infeference optimization with 
TensorRT." OFF)
 
-message(STATUS "CMAKE_SYSTEM_NAME ${CMAKE_SYSTEM_NAME}")
+if(NOT mxnet_LINKER_LIBS)
+  set(mxnet_LINKER_LIBS "")
+endif(NOT mxnet_LINKER_LIBS)
+
+if(MSVC)
+  set(SYSTEM_ARCHITECTURE x86_64)
+else()
+  execute_process(COMMAND uname -m COMMAND tr -d '\n' OUTPUT_VARIABLE 
SYSTEM_ARCHITECTURE)
+endif()
+
+set(CMAKE_MODULE_PATH 
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules;${CMAKE_MODULE_PATH}")
+
+SET(EXTRA_OPERATORS "" CACHE PATH "EXTRA OPERATORS PATH")
+
+if("$ENV{VERBOSE}" STREQUAL "1")
+  message(STATUS " Verbose Makefile ACTIVATED")
+  set(CMAKE_VERBOSE_MAKEFILE ON)
+endif()
+
+# ---[ BLAS
+
+# Choose BLAS (Basic Linear Algebra Subprograms) computation libraries
+
+# MXNet supports multiple mathematical backends for computations on the CPU:
+#
+# * Atlas
+# * OpenBLAS
+# * MKL (MKL, MKLML)
+# * MKLDNN
+# * Apple Accelerate
+#
+# The default order of choice for the libraries if found follows the path from 
the most
+# (recommended) to less performant backends. The order is as follows:
+#
+# For desktop platforms (x86_64):
+#
+# 1. MKLDNN (submodule) | USE_MKLDNN
+# 2. MKL | USE_MKL_IF_AVAILABLE
+# 3. MKLML (downloaded) | USE_MKLML
+# 4. Apple Accelerate | USE_APPLE_ACCELERATE_IF_AVAILABLE | Mac only
+# 5. OpenBLAS | BLAS | Options: Atlas, Open, MKL, Apple
+#
+# Note: If USE_MKL_IF_AVAILABLE is set to False then MKLML and MKLDNN will be 
disabled as well for configuration
+# backwards compatibility.
+#
+# For embedded platforms (all other and if cross compiled):
+#
+# 1. OpenBLAS | BLAS | Options: Atlas, Open
+#
+# You can set the BLAS library explicitly by setting the BLAS variable to:
+#
+# * Atlas
+# * Open
+# * MKL
+# * Apple
+#
+# See cmake/ChooseBLAS.cmake file for the options.
+#
+# Intel's MKL (Math Kernel Library) is one of the most powerful math libraries
+# https://software.intel.com/en-us/mkl
+#
+# It has following flavours:
+#
+# * MKL is a full library, containing all the functionality. It is free under
+#   community support licensing (https://software.intel.com/en-us/articles/free-mkl),
+#   but needs to be downloaded and installed manually.
+#
+# * MKLML is a subset of MKL. It contains smaller number of functions to reduce the
 
 Review comment:
   a smaller
   the user


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #12388: Installation instructions consolidation

2018-08-29 Thread GitBox
aaronmarkham commented on issue #12388: Installation instructions consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388#issuecomment-417062718
 
 
   @lebeg thanks for the review! I've addressed most of your comments. Please take a look at the build-from-source page again, since I made a lot of changes after incorporating your cmake documentation on BLAS. That was all really good stuff to add here!
   
   A big update would be to switch to cmake as the default and change all of the examples, but I think we should hold off until your cmake PR is merged, right?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

