[GitHub] jasonyu1996 opened a new issue #12390: [Feature Request] dtype for modules

2018-08-28 Thread GitBox
jasonyu1996 opened a new issue #12390: [Feature Request] dtype for modules
URL: https://github.com/apache/incubator-mxnet/issues/12390
 
 
   It seems that only a very small number of the modules available in the 
library support specifying a `dtype` in their constructors (`Dense` and 
`Embedding` among them). If not specified, parameters default to `float32`, 
which can be a troublesome limitation in some cases.
   
   On the forum: https://discuss.mxnet.io/t/default-datatype/1762
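As a rough sketch of the requested behavior (hypothetical code, not the actual MXNet API; the class name `SimpleDense` and its arguments are illustrative), every module constructor would accept a `dtype` that defaults to `float32` and is threaded through to its parameters:

```python
# Hypothetical sketch only: SimpleDense is not an MXNet class.
class SimpleDense:
    def __init__(self, units, dtype="float32"):
        # The default stays float32 for backward compatibility; callers
        # who need float16/float64 parameters can override it here
        # instead of casting every parameter after initialization.
        self.units = units
        self.dtype = dtype

layer_default = SimpleDense(4)                # parameters in float32
layer_fp64 = SimpleDense(4, dtype="float64")  # parameters in float64
```

Today only a handful of blocks such as `Dense` and `Embedding` expose such an argument; this issue asks for it to be generalized.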


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] meanmee removed a comment on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
meanmee removed a comment on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416130369
 
 
   the machines can already ssh to each other without a password 




[GitHub] meanmee removed a comment on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
meanmee removed a comment on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416125937
 
 
   Thank you. But if MXNet was installed by pip, how do I use `python 
launch.py`, since I didn't download the MXNet source code? 




[GitHub] meanmee removed a comment on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
meanmee removed a comment on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416130112
 
 
   I downloaded the MXNet source code but didn't compile it, since I used the 
MXNet installed by pip. When I use the launch.py provided in the MXNet source 
code, the following errors show up:
   
   xiaomin.wu@iva0605:/autofs/data56/public/xiaomin.wu/code/dist_mxnet_test$ 
python 
/autofs/data56/public/xiaomin.wu/software/incubator-mxnet-1.2.0/tools/launch.py 
-n 2 -s 2 -H hosts --sync-dst-dir /home/xiaomin.wu/cifar10_dist --launcher ssh 
"python /autofs/data56/public/xiaomin.wu/code/dist_mxnet_test/cifar10_dist.py"
   Can't load dmlc_tracker package.  Perhaps you need to run
   git submodule update --init --recursive
   Traceback (most recent call last):
 File 
"/autofs/data56/public/xiaomin.wu/software/incubator-mxnet-1.2.0/tools/launch.py",
 line 128, in <module>
   main()
 File 
"/autofs/data56/public/xiaomin.wu/software/incubator-mxnet-1.2.0/tools/launch.py",
 line 96, in main
   args = dmlc_opts(args)
 File 
"/autofs/data56/public/xiaomin.wu/software/incubator-mxnet-1.2.0/tools/launch.py",
 line 48, in dmlc_opts
   from dmlc_tracker import opts
   ImportError: No module named dmlc_tracker
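The traceback above occurs because `launch.py` from the source tree imports `dmlc_tracker`, which ships as a git submodule and is therefore absent from a checkout whose submodules were never initialized. A quick standard-library preflight check (a sketch; the helper name `module_available` is illustrative) is:

```python
import importlib.util

def module_available(name):
    """Return True if `name` is importable in the current environment."""
    return importlib.util.find_spec(name) is not None

# dmlc_tracker lives in a git submodule of the MXNet source tree,
# so it may be missing even when the rest of the checkout is present.
if not module_available("dmlc_tracker"):
    print("dmlc_tracker not found; try: "
          "git submodule update --init --recursive")
```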




[GitHub] meanmee removed a comment on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
meanmee removed a comment on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416132996
 
 
   I don't think so because I tried:
   python 
/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/mxnet/tools/launch.py -n 
2 -s 2 -H hosts --sync-dst-dir /home/xiaomin.wu/cifar10_dist --launcher ssh 
"python /autofs/data56/public/xiaomin.wu/code/dist_mxnet_test/cifar10_dist.py"
   
   the same errors show up




[GitHub] meanmee removed a comment on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
meanmee removed a comment on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416146366
 
 
   I updated the version to 1.2.1 via pip, but a new error shows up :(
   
   xiaomin.wu@iva0605:/autofs/data56/public/xiaomin.wu/code/dist_mxnet_test$ 
python 
/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/mxnet/tools/launch.py -n 
2 -s 2 -H hosts --sync-dst-dir /home/xiaomin.wu/cifar10_dist --launcher ssh 
"python /autofs/data56/public/xiaomin.wu/code/dist_mxnet_test/cifar10_dist.py"
   2018-08-27 15:54:12,883 INFO rsync 
/autofs/data56/public/xiaomin.wu/code/dist_mxnet_test/ -> 
10.14.6.5:/home/xiaomin.wu/cifar10_dist
   xiaomin.wu@10.14.6.5's password: 
/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/h5py/__init__.py:36: 
FutureWarning: Conversion of the second argument of issubdtype from `float` to 
`np.floating` is deprecated. In future, it will be treated as `np.float64 == 
np.dtype(float).type`.
 from ._conv import register_converters as _register_converters
   
   2018-08-27 15:54:18,462 INFO rsync 
/autofs/data56/public/xiaomin.wu/code/dist_mxnet_test/ -> 
10.14.6.8:/home/xiaomin.wu/cifar10_dist
   xiaomin.wu@10.14.6.5's password: xiaomin.wu@10.14.6.5's password: Traceback 
(most recent call last):
 File 
"/autofs/data56/public/xiaomin.wu/code/dist_mxnet_test/cifar10_dist.py", line 
23, in <module>
   import mxnet as mx
   ImportError: No module named mxnet
   Traceback (most recent call last):
 File 
"/autofs/data56/public/xiaomin.wu/code/dist_mxnet_test/cifar10_dist.py", line 
23, in <module>
   import mxnet as mx
   ImportError: No module named mxnet
   Exception in thread Thread-5:
   Traceback (most recent call last):
 File "/home/xiaomin.wu/anaconda2/lib/python2.7/threading.py", line 801, in 
__bootstrap_inner
   self.run()
 File "/home/xiaomin.wu/anaconda2/lib/python2.7/threading.py", line 754, in 
run
   self.__target(*self.__args, **self.__kwargs)
 File 
"/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/dmlc_tracker/ssh.py", 
line 61, in run
   subprocess.check_call(prog, shell = True)
 File "/home/xiaomin.wu/anaconda2/lib/python2.7/subprocess.py", line 186, 
in check_call
   raise CalledProcessError(retcode, cmd)
   CalledProcessError: Command 'ssh -o StrictHostKeyChecking=no 10.14.6.8 -p 22 
'export 
LD_LIBRARY_PATH=/usr/local/cuda/lib64::/usr/local/cuda-8.0/lib64:~/TensorRT-4.0.0.3/lib;
 export DMLC_ROLE=worker; export DMLC_PS_ROOT_PORT=9093; export 
DMLC_PS_ROOT_URI=10.14.6.5; export DMLC_NUM_SERVER=2; export DMLC_NUM_WORKER=2; 
cd /home/xiaomin.wu/cifar10_dist; python 
/autofs/data56/public/xiaomin.wu/code/dist_mxnet_test/cifar10_dist.py'' 
returned non-zero exit status 1
   Exception in thread Thread-3:
   Traceback (most recent call last):
 File "/home/xiaomin.wu/anaconda2/lib/python2.7/threading.py", line 801, in 
__bootstrap_inner
   self.run()
 File "/home/xiaomin.wu/anaconda2/lib/python2.7/threading.py", line 754, in 
run
   self.__target(*self.__args, **self.__kwargs)
 File 
"/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/dmlc_tracker/ssh.py", 
line 61, in run
   subprocess.check_call(prog, shell = True)
 File "/home/xiaomin.wu/anaconda2/lib/python2.7/subprocess.py", line 186, 
in check_call
   raise CalledProcessError(retcode, cmd)
   CalledProcessError: Command 'ssh -o StrictHostKeyChecking=no 10.14.6.8 -p 22 
'export 
LD_LIBRARY_PATH=/usr/local/cuda/lib64::/usr/local/cuda-8.0/lib64:~/TensorRT-4.0.0.3/lib;
 export DMLC_ROLE=server; export DMLC_PS_ROOT_PORT=9093; export 
DMLC_PS_ROOT_URI=10.14.6.5; export DMLC_NUM_SERVER=2; export DMLC_NUM_WORKER=2; 
cd /home/xiaomin.wu/cifar10_dist; python 
/autofs/data56/public/xiaomin.wu/code/dist_mxnet_test/cifar10_dist.py'' 
returned non-zero exit status 1




[GitHub] meanmee removed a comment on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
meanmee removed a comment on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416823724
 
 
   @eric-haibin-lin 
I can ssh from a to b without a password, and from b to a, but a cannot ssh to a without a password, and b cannot ssh to b without one. That is likely exactly the problem.




[GitHub] meanmee removed a comment on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
meanmee removed a comment on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416133087
 
 
   The version of MXNet I installed by pip is 1.2.0.




[GitHub] meanmee removed a comment on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
meanmee removed a comment on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416158008
 
 
   Permission denied, please try again.
   xiaomin.wu@10.14.6.5's password: 
   Permission denied, please try again.
   xiaomin.wu@10.14.6.5's password: 
   Permission denied (publickey,password).
   rsync: connection unexpectedly closed (0 bytes received so far) [sender]
   rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.1]
   Traceback (most recent call last):
 File 
"/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/mxnet/tools/launch.py", 
line 128, in <module>
   main()
 File 
"/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/mxnet/tools/launch.py", 
line 113, in main
   ssh.submit(args)
 File 
"/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/dmlc_tracker/ssh.py", 
line 86, in submit
   hostIP=args.host_ip)
 File 
"/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/dmlc_tracker/tracker.py",
 line 428, in submit
   fun_submit(nworker, nserver, envs)
 File 
"/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/dmlc_tracker/ssh.py", 
line 69, in ssh_submit
   sync_dir(local_dir, h, working_dir)
 File 
"/home/xiaomin.wu/anaconda2/lib/python2.7/site-packages/dmlc_tracker/ssh.py", 
line 21, in sync_dir
   subprocess.check_call([prog], shell = True)
 File "/home/xiaomin.wu/anaconda2/lib/python2.7/subprocess.py", line 186, 
in check_call
   raise CalledProcessError(retcode, cmd)
   subprocess.CalledProcessError: Command '['rsync -az --rsh="ssh -o 
StrictHostKeyChecking=no -p 22" /home/xiaomin.wu/cifar10_dist/ 
10.14.6.5:/home/xiaomin.wu/cifar10_dist']' returned non-zero exit status 255
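The tracker shells out to `rsync` and `ssh`, so the exit-status-255 failure above is just an ssh authentication failure surfacing through `subprocess`. A hedged sketch of a preflight check for passwordless ssh (assumes an OpenSSH client is installed; the function name is illustrative):

```python
import subprocess

def passwordless_ssh_ok(host, timeout=5):
    """Return True if `host` accepts key-based ssh without prompting.

    BatchMode=yes makes ssh fail instead of asking for a password,
    which is how the dmlc ssh launcher expects hosts to behave.
    """
    try:
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes",
             "-o", "ConnectTimeout=%d" % timeout,
             "-o", "StrictHostKeyChecking=no", host, "true"],
            capture_output=True, timeout=timeout + 5)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0
```

Note that each machine must also be able to ssh to itself this way, since the launcher may start local workers through the same code path.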




[GitHub] jinhuang415 closed issue #10175: MXNet MKLDNN build dependency/flow discussion

2018-08-28 Thread GitBox
jinhuang415 closed issue #10175: MXNet MKLDNN build dependency/flow discussion
URL: https://github.com/apache/incubator-mxnet/issues/10175
 
 
   




[GitHub] meanmee commented on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
meanmee commented on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416823724
 
 
   @eric-haibin-lin 
I can ssh from a to b without a password, and from b to a, but a cannot ssh to a without a password, and b cannot ssh to b without one. That is likely exactly the problem.




[GitHub] yuxiangw opened a new issue #12389: [Bug] Not able to detect out of bound index on an ndarray. potential memory overflow.

2018-08-28 Thread GitBox
yuxiangw opened a new issue #12389: [Bug] Not able to detect out of bound index 
on an ndarray. potential memory overflow.
URL: https://github.com/apache/incubator-mxnet/issues/12389
 
 
   
   ## Minimum reproducible example
   
   ```python
   from mxnet import ndarray as nd
   import numpy as np
   # Declare an mxnet ndarray
   x = nd.array(range(5))
   print(x)
   
   # Try an out-of-bound index
   idx = [5]
   print('mxnet:', x[idx])
   ```
   
   ## Output:
   ```
   [0. 1. 2. 3. 4.]
   
   mxnet: 
   [-3.689349e+19]
   ```
   
   Note that there is no complaint whatsoever, not even a warning.
   
   On the other hand, if you do the following instead:
   ```python
   # indexing with an integer
   idx = 5
   x[idx]
   ```
   then you get the desired error message:
   ```
   IndexError: index 5 is out of bounds for axis 0 with size 5
   ```
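The behavior the reporter expects for list-style (advanced) indexing can be sketched in plain Python (illustrative only; `safe_take` is not an MXNet function):

```python
def safe_take(arr, indices):
    """Gather arr[i] for each i in indices, raising IndexError on an
    out-of-bounds index instead of silently reading garbage memory,
    which is the behavior this issue asks ndarray indexing to adopt."""
    n = len(arr)
    out = []
    for i in indices:
        if not -n <= i < n:
            raise IndexError(
                "index %d is out of bounds for axis 0 with size %d" % (i, n))
        out.append(arr[i])
    return out

print(safe_take([0., 1., 2., 3., 4.], [1, 3]))  # [1.0, 3.0]
```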




[GitHub] zheng-da closed pull request #12151: fix a minor bug in while_loop

2018-08-28 Thread GitBox
zheng-da closed pull request #12151: fix a minor bug in while_loop
URL: https://github.com/apache/incubator-mxnet/pull/12151
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/symbol/contrib.py b/python/mxnet/symbol/contrib.py
index 38195bd62ff..f89c73164fe 100644
--- a/python/mxnet/symbol/contrib.py
+++ b/python/mxnet/symbol/contrib.py
@@ -539,9 +539,6 @@ def _union_inputs(*graphs):
 # find symbols used in either cond_g or func_g
 input_syms, ((cond_input_locs, _), (func_input_locs, func_var_locs)) = \
 _union_inputs(cond_g, func_g)
-for i_th, loc in enumerate(func_var_locs, 1):
-if loc == -1:
-raise ValueError("The %d-th loop_var doesn't involve into the computation" % i_th)
 result = symbol._internal._while_loop(
 # [cond, func_g, *input_syms]
 cond_g,
diff --git a/tests/python/unittest/test_contrib_control_flow.py b/tests/python/unittest/test_contrib_control_flow.py
index a4b794c9595..7205b55ec52 100644
--- a/tests/python/unittest/test_contrib_control_flow.py
+++ b/tests/python/unittest/test_contrib_control_flow.py
@@ -139,6 +139,30 @@ def hybrid_forward(self, F, *loop_vars):
 assert result_s.asscalar() == 0
 
 
+def test_while_loop2():
+class TestBlock(gluon.HybridBlock):
+def __init__(self, prefix=None, params=None):
+super(TestBlock, self).__init__(prefix=prefix, params=params)
+
+# In this test, body_func only accesses one of the states,
+# so not all loop variables are used.
+def hybrid_forward(self, F, data):
+def cond_func(state1, state2):
+return state1 > 0
+def body_func(state1, state2):
+return (state2, [state2 + 1, state2 + 2])
+return F.contrib.while_loop(
+cond=cond_func,
+func=body_func,
+loop_vars=[data, data + 1],
+max_iterations=10)
+
+block = TestBlock()
+block.initialize(ctx=default_context())
+block.hybridize()
+block(mx.nd.ones((1)))
+
+
 def _verify_while_loop(cond, func, loop_var_shapes, free_var_shapes, is_train, max_iterations, is_for, n_steps):
 
 def _create_vars(num, prefix):
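The behavioral change can be sketched in plain Python (a hedged analogue, not MXNet's implementation): after this PR, a `while_loop` whose body ignores some of its loop variables is accepted rather than rejected.

```python
def while_loop(cond, func, loop_vars, max_iterations):
    """Minimal pure-Python analogue of contrib.while_loop semantics:
    run func while cond holds, up to max_iterations, collecting outputs.
    There is deliberately no check that every loop variable is used."""
    outputs, steps = [], 0
    while steps < max_iterations and cond(*loop_vars):
        out, loop_vars = func(*loop_vars)
        outputs.append(out)
        steps += 1
    return outputs, loop_vars

# The body reads only state2, as in the new test case above:
outs, final = while_loop(
    cond=lambda s1, s2: s1 > 0,
    func=lambda s1, s2: (s2, [s2 + 1, s2 + 2]),
    loop_vars=[1, 2],
    max_iterations=10)
```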


 




[GitHub] zheng-da closed issue #11448: fail to fall back when sparse arrays are passed to MKLDNN-enabled operators.

2018-08-28 Thread GitBox
zheng-da closed issue #11448: fail to fall back when sparse arrays are passed 
to MKLDNN-enabled operators.
URL: https://github.com/apache/incubator-mxnet/issues/11448
 
 
   




[GitHub] aaronmarkham commented on issue #12102: site-wide social include

2018-08-28 Thread GitBox
aaronmarkham commented on issue #12102: site-wide social include
URL: https://github.com/apache/incubator-mxnet/pull/12102#issuecomment-416817078
 
 
   @nswamy - Do you still request changes?




[GitHub] pengzhao-intel commented on issue #11448: fail to fall back when sparse arrays are passed to MKLDNN-enabled operators.

2018-08-28 Thread GitBox
pengzhao-intel commented on issue #11448: fail to fall back when sparse arrays 
are passed to MKLDNN-enabled operators.
URL: 
https://github.com/apache/incubator-mxnet/issues/11448#issuecomment-416816945
 
 
   @zheng-da the related PRs are merged. Could you verify whether all issues 
are fixed and close the issue?




[GitHub] lanking520 commented on a change in pull request #12387: MXNET-873 - Bring Clojure Package Inline with New DataDesc and Layout in Scala Package

2018-08-28 Thread GitBox
lanking520 commented on a change in pull request #12387: MXNET-873 - Bring 
Clojure Package Inline with New DataDesc and Layout in Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/12387#discussion_r213536814
 
 

 ##
 File path: 
contrib/clojure-package/examples/pre-trained-models/src/pre_trained_models/predict_image.clj
 ##
 @@ -92,7 +92,7 @@
 
 (comment
 
-  (predict "https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/python/predict_image/cat.jpg")
+  (predict "https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/python/predict_image/cat.jpg" true)
 
 Review comment:
   What does this `true` mean in here?




[GitHub] lanking520 commented on a change in pull request #12387: MXNET-873 - Bring Clojure Package Inline with New DataDesc and Layout in Scala Package

2018-08-28 Thread GitBox
lanking520 commented on a change in pull request #12387: MXNET-873 - Bring 
Clojure Package Inline with New DataDesc and Layout in Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/12387#discussion_r213537200
 
 

 ##
 File path: contrib/clojure-package/src/org/apache/clojure_mxnet/symbol.clj
 ##
 @@ -144,7 +144,7 @@
which must be known from the rest of the net."
   ([start {:keys [step repeat dtype]
:or {step (float 1) repeat (int 1) dtype base/MX_REAL_TYPE}
-  :as opts}]
+   :as opts}]
 
 Review comment:
   space issue...




[GitHub] liangxi627 closed issue #12120: compile error in "image_aug_default.cc"

2018-08-28 Thread GitBox
liangxi627 closed issue #12120: compile error in "image_aug_default.cc"
URL: https://github.com/apache/incubator-mxnet/issues/12120
 
 
   




[GitHub] liangxi627 commented on issue #11890: Error During Make - undefined reference

2018-08-28 Thread GitBox
liangxi627 commented on issue #11890: Error During Make - undefined reference
URL: 
https://github.com/apache/incubator-mxnet/issues/11890#issuecomment-416811231
 
 
   @sandeep-krishnamurthy Please refer to the following links.
   
http://answers.opencv.org/question/95649/error-in-imwrite-code-specified-in-the-opencv-api-reference/
   
https://stackoverflow.com/questions/24439548/opencv-tutorial-load-and-display-an-image-codeblocks-fedora20




[GitHub] eric-haibin-lin commented on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
eric-haibin-lin commented on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416810947
 
 
   @meanmee looks like there's some problem connecting to your remote instance. 
Did you set up passwordless ssh? I suggest you move the question to 
discuss.mxnet.io, which is monitored actively. GitHub issues are more for bug 
reports or task/feature requests. 




[GitHub] eric-haibin-lin commented on issue #12157: Subgraph API for integrating accelerators with MXNet

2018-08-28 Thread GitBox
eric-haibin-lin commented on issue #12157: Subgraph API for integrating 
accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#issuecomment-416810308
 
 
   LGTM




[GitHub] ghgggg closed issue #12267: some errors when extracting feature with mxnet c++ api in windows10 in gpu mode

2018-08-28 Thread GitBox
ghgggg closed issue #12267: some errors when extracting feature with mxnet c++ 
api in windows10 in gpu mode
URL: https://github.com/apache/incubator-mxnet/issues/12267
 
 
   




[GitHub] StephanieYuan commented on issue #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
StephanieYuan commented on issue #12376: [MXNET-854] SVRG Optimization in 
Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#issuecomment-416799044
 
 
   Move svrg_optimization to python/contrib package.




[GitHub] StephanieYuan edited a comment on issue #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
StephanieYuan edited a comment on issue #12376: [MXNET-854] SVRG Optimization 
in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#issuecomment-416799044
 
 
   Moved svrg_optimization to python/contrib package.




[GitHub] zheng-da edited a comment on issue #12269: get memory error when running a model exported from gluon model zoo

2018-08-28 Thread GitBox
zheng-da edited a comment on issue #12269: get memory error when running a 
model exported from gluon model zoo
URL: 
https://github.com/apache/incubator-mxnet/issues/12269#issuecomment-416798792
 
 
   I can create an issue, so hopefully someone can address it




[GitHub] zheng-da commented on issue #12269: get memory error when running a model exported from gluon model zoo

2018-08-28 Thread GitBox
zheng-da commented on issue #12269: get memory error when running a model 
exported from gluon model zoo
URL: 
https://github.com/apache/incubator-mxnet/issues/12269#issuecomment-416798792
 
 
   I don't. I can create an issue, so hopefully someone can address it




[GitHub] jasonyu1996 edited a comment on issue #12327: [Feature Request] support of diag for N-d arrays

2018-08-28 Thread GitBox
jasonyu1996 edited a comment on issue #12327: [Feature Request] support of diag 
for N-d arrays
URL: 
https://github.com/apache/incubator-mxnet/issues/12327#issuecomment-416790602
 
 
   Hi! Thank you for your response! I just looked at the numpy interfaces for 
computing the diagonal, and noticed that besides `numpy.diag` (which is 
exactly where the design of our `diag` operator comes from), numpy provides 
a second diagonal-extracting function, `numpy.diagonal` 
(https://www.numpy.org/devdocs/reference/generated/numpy.diagonal.html), which 
in my opinion is a good reference for extending the functionality of our `diag` 
operator (I also noticed that `numpy.trace` supports N-d arrays and accepts 
the same set of arguments as `numpy.diagonal`). However, I am not sure whether 
a new operator should be added. I wonder why numpy provides two functions, one 
strictly weaker than the other, that do the same thing.
   
   As for the implementation details, I have to admit that I am not familiar 
with this, and am therefore not sure whether the performance could be further 
improved by implementing it in ways other than simply fusing some high-level 
function calls together. I think it may be necessary to refer to the 
implementation of the 2-d case, which does not seem to depend on other 
high-level function calls (`diag` for 2-d arrays can also be implemented with 
an `arange` followed by a `pick`).
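The `arange`-then-`pick` idea for the 2-d case can be sketched in plain Python (illustrative only, not the operator's actual kernel; `diag2d` is a hypothetical name):

```python
def diag2d(mat, k=0):
    """Extract the k-th diagonal of a 2-d list-of-lists,
    analogous to numpy.diag applied to a matrix (sketch only).

    For each row index i (the "arange" part), pick column i + k
    (the "pick" part), skipping columns that fall out of range."""
    rows, cols = len(mat), len(mat[0])
    return [mat[i][i + k]
            for i in range(rows)
            if 0 <= i + k < cols]

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(diag2d(m))        # [1, 5, 9]
print(diag2d(m, k=1))   # [2, 6]
```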




[GitHub] jasonyu1996 commented on issue #12327: [Feature Request] support of diag for N-d arrays

2018-08-28 Thread GitBox
jasonyu1996 commented on issue #12327: [Feature Request] support of diag for 
N-d arrays
URL: 
https://github.com/apache/incubator-mxnet/issues/12327#issuecomment-416790602
 
 
   Hi! Thank you for your response! I just looked at the numpy interfaces for 
computing the diagonal, and noticed that besides `numpy.diag` (which is 
exactly where the design of our `diag` operator comes from), numpy provides 
a second diagonal-extracting function, `numpy.diagonal` 
(https://www.numpy.org/devdocs/reference/generated/numpy.diagonal.html), which 
in my opinion is a good reference for extending the functionality of our `diag` 
operator. However, I am not sure whether a new operator should be added. 
I wonder why numpy provides two functions, one strictly weaker than the other, 
that do the same thing.
   
   As for the implementation details, I have to admit that I am not familiar 
with this, and am therefore not sure whether the performance could be further 
improved by implementing it in ways other than simply fusing some high-level 
function calls together. I think it may be necessary to refer to the 
implementation of the 2-d case, which does not seem to depend on other 
high-level function calls (`diag` for 2-d arrays can also be implemented with 
an `arange` followed by a `pick`).




[GitHub] pengzhao-intel commented on issue #12269: get memory error when running a model exported from gluon model zoo

2018-08-28 Thread GitBox
pengzhao-intel commented on issue #12269: get memory error when running a model 
exported from gluon model zoo
URL: 
https://github.com/apache/incubator-mxnet/issues/12269#issuecomment-416790167
 
 
   @zheng-da do you have a plan to improve it?




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-08-28 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 5ac893f  Bump the publish timestamp.
5ac893f is described below

commit 5ac893f88c7723e087f0494dd11feb74d52b
Author: mxnet-ci 
AuthorDate: Wed Aug 29 00:55:53 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..ca60c5c
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Aug 29 00:55:53 UTC 2018



[GitHub] aaronmarkham opened a new pull request #12388: Installation instructions consolidation

2018-08-28 Thread GitBox
aaronmarkham opened a new pull request #12388: Installation instructions 
consolidation
URL: https://github.com/apache/incubator-mxnet/pull/12388
 
 
   ## Description ##
   Different installation pages were out of date or out of sync. The install 
page was over-complicated. The build from source instructions weren't really 
working.
   
   This PR consolidates install info, directs users to where the more recent 
updates happen to be, and simplifies the basic user install. I started off just 
trying to fix the C++ setup info, but found so many other issues, I worked on 
those too.
   
   ## Features
   * Fixes a bug that made it so you couldn't preview the dev version's install 
page without it redirecting you. (options.js)
   * Updates the 
[build_from_source](http://34.201.8.176/versions/cplusplus_install/install/build_from_source.html)
 page 
   - Point to the OS-focused pages where this information is most 
up-to-date.
   - Added a [table that maps links to each 
binding](http://34.201.8.176/versions/cplusplus_install/install/build_from_source.html#installing-mxnet-language-bindings).
   - Leaves the centos/other linux sections there since this info isn't 
covered elsewhere
   - Leaves the NCCL section alone (is this even current info?)
   - Links this page in the [Download Source 
section](http://34.201.8.176/versions/cplusplus_install/install/index.html) 
found at the bottom of install
   * Creates a new [C++ setup 
page](http://34.201.8.176/versions/cplusplus_install/install/c_plus_plus.html) 
that brings together C++ info found around the project.
   * Refactors the install page. 
   - Moves the [validation section to a new 
page](http://34.201.8.176/versions/cplusplus_install/install/validate_mxnet.html)
   - Removes virtualenv (this was repeated multiple times in the file without 
much real value)
   - Super simplifies the pip install sections
   - Links back to each OS's detailed setup guides
   - Links to specific install guide sections for Perl and Julia
   - and more... the file went from 2531 lines to 1062. So much easier to 
deal with... probably could take more pruning...
   * Fixes some issues on the OS guide pages - the Python binding steps were 
missing
   
   ## Preview
   http://34.201.8.176/versions/cplusplus_install/install/index.html
   




[GitHub] samskalicky edited a comment on issue #12327: [Feature Request] support of diag for N-d arrays

2018-08-28 Thread GitBox
samskalicky edited a comment on issue #12327: [Feature Request] support of diag 
for N-d arrays
URL: 
https://github.com/apache/incubator-mxnet/issues/12327#issuecomment-416780686
 
 
   Hi @jasonyu1996. Thanks for reporting this. I've been looking to implement a 
trace operator in mxnet per this request #10500. In preparing for implementing 
trace (which is really just summing the diagonal of the matrix) I also noticed 
the limited implementation of the diag operator. 
   
   Given that many MXNet users have data in the form of WxHxC (width, height, 
channel), and then add a 4th dimension for number/batch, what are your thoughts 
on a general N-dimensionality approach for this operator? Is it necessary to 
support the general case?
   
   And how about the implementation of the diag operator? As you mentioned, 
there are already existing high-performance implementations for each 
sub-computation required. Do you think there is an opportunity for further 
performance improvement (memory, time, etc.) by fusing these together? Or do 
you think it would be best to just implement diag calling these 
sub-computations separately (inside the diag operator)?
   
   Let me know if you have thoughts on this. I would be interested in working 
with you to implement this as well.
   
   Here's the original diag issue: #9253




[GitHub] samskalicky commented on issue #12327: [Feature Request] support of diag for N-d arrays

2018-08-28 Thread GitBox
samskalicky commented on issue #12327: [Feature Request] support of diag for 
N-d arrays
URL: 
https://github.com/apache/incubator-mxnet/issues/12327#issuecomment-416780686
 
 
   Hi @jasonyu1996. Thanks for reporting this. I've been looking to implement a 
trace operator in mxnet per this request #10500. In preparing for implementing 
trace (which is really just summing the diagonal of the matrix) I also noticed 
the limited implementation of the diag operator. 
   
   Given that many MXNet users have data in the form of WxHxC (width, height, 
channel), and then add a 4th dimension for number/batch, what are your thoughts 
on a general N-dimensionality approach for this operator? Is it necessary to 
support the general case?
   
   And how about the implementation of the diag operator? As you mentioned, 
there are already existing high-performance implementations for each 
sub-computation required. Do you think there is an opportunity for further 
performance improvement (memory, time, etc.) by fusing these together? Or do 
you think it would be best to just implement diag calling these 
sub-computations separately (inside the diag operator)?
   
   Let me know if you have thoughts on this. I would be interested in working 
with you to implement this as well.




[GitHub] ankkhedia edited a comment on issue #12162: Edit shape.array doc and some style improvements

2018-08-28 Thread GitBox
ankkhedia edited a comment on issue #12162: Edit shape.array doc and some style 
improvements
URL: https://github.com/apache/incubator-mxnet/pull/12162#issuecomment-416779364
 
 
   looks good to me!
   Should be good to go after you retrigger CI.




[GitHub] ankkhedia edited a comment on issue #12162: Edit shape.array doc and some style improvements

2018-08-28 Thread GitBox
ankkhedia edited a comment on issue #12162: Edit shape.array doc and some style 
improvements
URL: https://github.com/apache/incubator-mxnet/pull/12162#issuecomment-416779364
 
 
   looks good to me!
   Should be good to go after you retrigger CI.




[GitHub] ankkhedia commented on issue #12162: Edit shape.array doc and some style improvements

2018-08-28 Thread GitBox
ankkhedia commented on issue #12162: Edit shape.array doc and some style 
improvements
URL: https://github.com/apache/incubator-mxnet/pull/12162#issuecomment-416779364
 
 
   looks good to me!




[GitHub] haojin2 commented on issue #8866: src/operator/./bilinear_sampler-inl.h:105: Have not implemented the data req combinations! gdata_req=0 ggrid_req=1

2018-08-28 Thread GitBox
haojin2 commented on issue #8866: src/operator/./bilinear_sampler-inl.h:105: 
Have not implemented the data req combinations! gdata_req=0 ggrid_req=1
URL: 
https://github.com/apache/incubator-mxnet/issues/8866#issuecomment-416778608
 
 
   A fix was delivered in #12386, but the corresponding unit test is currently 
blocked by the flakiness of test_bilinear_sampler




[GitHub] hetong007 commented on issue #12162: Edit shape.array doc and some style improvements

2018-08-28 Thread GitBox
hetong007 commented on issue #12162: Edit shape.array doc and some style 
improvements
URL: https://github.com/apache/incubator-mxnet/pull/12162#issuecomment-416778171
 
 
   @terrytangyuan You can trigger the CI again, with an empty commit by
   `git commit --allow-empty -m "Trigger CI"`




[GitHub] ankkhedia commented on issue #12310: Flaky test: test_ndarray.test_order

2018-08-28 Thread GitBox
ankkhedia commented on issue #12310: Flaky test: test_ndarray.test_order
URL: 
https://github.com/apache/incubator-mxnet/issues/12310#issuecomment-416777661
 
 
   @sxjscience The issue seems to be only with `ret_typ="mask"` for the topk 
operator
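
   For reference, as I understand it, `ret_typ="mask"` asks `topk` to return a 
tensor of the input's shape with ones at the top-k positions along the given 
axis. A numpy sketch of the expected semantics (`topk_mask` is a hypothetical 
reference implementation, not MXNet code; `np.put_along_axis` needs numpy >= 1.15):

```python
import numpy as np

def topk_mask(a, k, axis=-1):
    # 1 where the element is among the k largest along `axis`, else 0.
    idx = np.argpartition(-a, k - 1, axis=axis)          # k largest first
    top_idx = np.take(idx, np.arange(k), axis=axis)      # their positions
    mask = np.zeros_like(a)                              # keeps input dtype
    np.put_along_axis(mask, top_idx, 1, axis=axis)
    return mask

a = np.array([[3, 1, 2],
              [0, 5, 4]])
print(topk_mask(a, 2, axis=1))
```

   Note that `np.zeros_like(a)` keeps the input's dtype, which matches the 
expectation that the mask should not be hard-coded to `float32`.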




[GitHub] sxjscience commented on issue #12310: Flaky test: test_ndarray.test_order

2018-08-28 Thread GitBox
sxjscience commented on issue #12310: Flaky test: test_ndarray.test_order
URL: 
https://github.com/apache/incubator-mxnet/issues/12310#issuecomment-416775348
 
 
   Thanks for reporting this. I find we can use other dtypes
   ```python
   import mxnet as mx
   import numpy as np
   dat_size=5
   dtype=np.int32
   a_npy= np.arange(dat_size ** 4, dtype=dtype).reshape((dat_size, dat_size, 
dat_size, dat_size))
   a_nd = mx.nd.array(a_npy, ctx=mx.gpu(0), dtype=dtype)
   nd_ret_topk = mx.nd.topk(a_nd, axis=1, k=2, ret_typ="mask", is_ascend=False)
   print(nd_ret_topk.dtype)
   print(nd_ret_topk)
   ```
   
   I'm looking for the bug in the code.




[GitHub] gigasquid commented on issue #12387: MXNET-873 - Bring Clojure Package inline with new DataDesc and Layout in Scala Package

2018-08-28 Thread GitBox
gigasquid commented on issue #12387: MXNET-873 - Bring Clojure Package inline 
with new DataDesc and Layout in Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/12387#issuecomment-416772699
 
 
   @lanking520 please give a look when you get a chance




[GitHub] gigasquid opened a new pull request #12387: MXNET-873 - Bring Clojure Package inline with new DataDesc and Layout in Scala Package

2018-08-28 Thread GitBox
gigasquid opened a new pull request #12387: MXNET-873 - Bring Clojure Package 
inline with new DataDesc and Layout in Scala Package
URL: https://github.com/apache/incubator-mxnet/pull/12387
 
 
   ## Description ##
   The Scala package has updated the DataDesc to include Layout. The Clojure 
package has been updated to move inline with it.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [X] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [X] Changes are complete (i.e. I finished coding on this PR)
   - [X] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [X] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [X] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - A layout namespace has been created and used in DataDesc
   - The Clojure package no longer infers the layout from the shape. It handles 
it the same way as the Scala package by having it be undefined
   - The original `module.fit` interop code has been restored. The workarounds 
required by the old DataDesc have been resolved with this PR 
https://github.com/apache/incubator-mxnet/pull/11844
   - Most of the examples have moved to use `provide-data-desc` instead of 
`provide-data`
   - Formatting fixes
   - Tweak to Char RNN example to run for fewer epochs before showing the 
pre-loaded result
   
   




[GitHub] reminisce commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-28 Thread GitBox
reminisce commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r213503117
 
 

 ##
 File path: src/operator/subgraph/partition_graph.cc
 ##
 @@ -0,0 +1,774 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2018 by Contributors
+ * \file partition_graph.cc
+ * \brief
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./subgraph_property.h"
+
+namespace nnvm {
+NodePtr CreateVariableNode(const std::string& name);
+}
+
+namespace mxnet {
+
+namespace op {
+
+using nnvm::Symbol;
+using nnvm::Node;
+using nnvm::NodePtr;
+using nnvm::NodeEntry;
+using nnvm::Graph;
+
+#define DEBUG_SUBGRAPH 0
+
+namespace sg {  // sg stands for subgraph
+
+struct SimpleNode;
+using SimpleNodePtr = std::shared_ptr<SimpleNode>;
+
+/*!
+ * \brief Node of the undirected graph which replicates the network structures
+ * of the computational graph. It is used to ease the graph traversal for finding
+ * subgraphs.
+ */
+struct SimpleNode {
+  static SimpleNodePtr Create() {
+    return std::make_shared<SimpleNode>();
+  }
+  SimpleNode() : label(-1), node(nullptr) {}
+  /*! subgraph label */
+  int label;
+  /*! the original node in the computational graph it references*/
+  nnvm::Node* node;
+  /*!
+   * \brief output nodes of the current node
+   * key is node ptr and value is an array of indices standing for the entry indices
+   * in key->inputs whose source is the current node.
+   */
+  std::unordered_map<Node*, std::vector<size_t>> outputs;
+};  // struct SimpleNode
+
+#if DEBUG_SUBGRAPH
+void PrintSubgraph(const std::vector<SimpleNodePtr>& simple_nodes) {
+  std::string op_names = "";
+  for (size_t i = 0; i < simple_nodes.size(); ++i) {
+    op_names += simple_nodes[i]->node->attrs.name + ' ';
+  }
+  LOG(INFO) << "Subgraph node names: " << op_names;
+}
+
+void PrintNodeEntry(const nnvm::NodeEntry& entry) {
+  std::string ret = "NodeEntry: node_name=" + entry.node->attrs.name
+    + ", index=" + std::to_string(entry.index) + ", version=" + std::to_string(entry.version);
+  LOG(INFO) << ret;
+}
+
+void PrintNodeEntries(const std::vector<nnvm::NodeEntry*>& entries) {
+  for (size_t i = 0; i < entries.size(); ++i) {
+    PrintNodeEntry(*entries[i]);
+  }
+}
+#endif
+
+/*!
+ * \brief Given a MXNet computational graph, create an undirected graph from it.
+ * \param g the MXNet computational graph
+ * \param simple_nodes the nodes of undirected graph in top sorted order
+ */
+void CreateSimpleGraph(const Graph& g,
+                       std::vector<SimpleNodePtr>* simple_nodes) {
+  const auto& indexed_graph = g.indexed_graph();
+  simple_nodes->reserve(indexed_graph.num_nodes());
+  DFSVisit(g.outputs, [&](const NodePtr& node) {
+    SimpleNodePtr sn = SimpleNode::Create();
+    sn->node = node.get();
+    for (size_t i = 0; i < sn->node->inputs.size(); ++i) {
+      const auto& e = sn->node->inputs[i];
+      const auto input_nid = indexed_graph.node_id(e.node.get());
+      CHECK_LT(input_nid, simple_nodes->size());
+      auto& input_node_outputs = (*simple_nodes)[input_nid]->outputs;
+      auto it = input_node_outputs.find(sn->node);
+      if (it == input_node_outputs.end()) {
+        input_node_outputs.emplace(sn->node, std::vector<size_t>{i});
+      } else {
+        it->second.push_back(i);
+      }
+    }
+    simple_nodes->emplace_back(std::move(sn));
+  });
+}
+
+/*!
+ * \brief Reset labels of the subgraph nodes to the original state
+ * and clear the vector of subgraph nodes.
+ */
+void ResetNodeLabels(const nnvm::Graph& g,
+                     const std::vector<SimpleNodePtr>& simple_nodes,
+                     std::vector<nnvm::Node*>* subgraph_nodes) {
+  for (auto n : *subgraph_nodes) {
+    const auto nid = g.indexed_graph().node_id(n);
+    simple_nodes[nid]->label = -1;
+  }
+  subgraph_nodes->clear();
+}
+
+/*!
+ * \brief This function traverses the nodes in a computation graph from a 
starting
+ * node following the input edges and output edges, and marks all nodes that
+ * can be accessed from the starting node. Before the function returns,
+ * it will conduct checking whether there is a loop between the potential 
subgraph
+ * and the outside nodes. If so, add the node that should 

[GitHub] reminisce commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-28 Thread GitBox
reminisce commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r213503015
 
 

 ##
 File path: src/operator/subgraph/partition_graph.cc
 ##
 @@ -0,0 +1,774 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2018 by Contributors
+ * \file partition_graph.cc
+ * \brief
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./subgraph_property.h"
+
+namespace nnvm {
+NodePtr CreateVariableNode(const std::string& name);
+}
+
+namespace mxnet {
+
+namespace op {
+
+using nnvm::Symbol;
+using nnvm::Node;
+using nnvm::NodePtr;
+using nnvm::NodeEntry;
+using nnvm::Graph;
+
+#define DEBUG_SUBGRAPH 0
+
+namespace sg {  // sg stands for subgraph
+
+struct SimpleNode;
+using SimpleNodePtr = std::shared_ptr<SimpleNode>;
+
+/*!
+ * \brief Node of the undirected graph which replicates the network structures
+ * of the computational graph. It is used to ease the graph traversal for finding
+ * subgraphs.
+ */
+struct SimpleNode {
+  static SimpleNodePtr Create() {
+    return std::make_shared<SimpleNode>();
+  }
+  SimpleNode() : label(-1), node(nullptr) {}
+  /*! subgraph label */
+  int label;
+  /*! the original node in the computational graph it references*/
+  nnvm::Node* node;
+  /*!
+   * \brief output nodes of the current node
+   * key is node ptr and value is an array of indices standing for the entry indices
+   * in key->inputs whose source is the current node.
+   */
+  std::unordered_map<Node*, std::vector<size_t>> outputs;
+};  // struct SimpleNode
+
+#if DEBUG_SUBGRAPH
+void PrintSubgraph(const std::vector<SimpleNodePtr>& simple_nodes) {
+  std::string op_names = "";
+  for (size_t i = 0; i < simple_nodes.size(); ++i) {
+    op_names += simple_nodes[i]->node->attrs.name + ' ';
+  }
+  LOG(INFO) << "Subgraph node names: " << op_names;
+}
+
+void PrintNodeEntry(const nnvm::NodeEntry& entry) {
+  std::string ret = "NodeEntry: node_name=" + entry.node->attrs.name
+    + ", index=" + std::to_string(entry.index) + ", version=" + std::to_string(entry.version);
+  LOG(INFO) << ret;
+}
+
+void PrintNodeEntries(const std::vector<nnvm::NodeEntry*>& entries) {
+  for (size_t i = 0; i < entries.size(); ++i) {
+    PrintNodeEntry(*entries[i]);
+  }
+}
+#endif
+
+/*!
+ * \brief Given a MXNet computational graph, create an undirected graph from it.
+ * \param g the MXNet computational graph
+ * \param simple_nodes the nodes of undirected graph in top sorted order
+ */
+void CreateSimpleGraph(const Graph& g,
+                       std::vector<SimpleNodePtr>* simple_nodes) {
+  const auto& indexed_graph = g.indexed_graph();
+  simple_nodes->reserve(indexed_graph.num_nodes());
+  DFSVisit(g.outputs, [&](const NodePtr& node) {
+    SimpleNodePtr sn = SimpleNode::Create();
+    sn->node = node.get();
+    for (size_t i = 0; i < sn->node->inputs.size(); ++i) {
+      const auto& e = sn->node->inputs[i];
+      const auto input_nid = indexed_graph.node_id(e.node.get());
+      CHECK_LT(input_nid, simple_nodes->size());
+      auto& input_node_outputs = (*simple_nodes)[input_nid]->outputs;
+      auto it = input_node_outputs.find(sn->node);
+      if (it == input_node_outputs.end()) {
+        input_node_outputs.emplace(sn->node, std::vector<size_t>{i});
+      } else {
+        it->second.push_back(i);
+      }
+    }
+    simple_nodes->emplace_back(std::move(sn));
+  });
+}
+
+/*!
+ * \brief Reset labels of the subgraph nodes to the original state
+ * and clear the vector of subgraph nodes.
+ */
+void ResetNodeLabels(const nnvm::Graph& g,
+                     const std::vector<SimpleNodePtr>& simple_nodes,
+                     std::vector<nnvm::Node*>* subgraph_nodes) {
+  for (auto n : *subgraph_nodes) {
+    const auto nid = g.indexed_graph().node_id(n);
+    simple_nodes[nid]->label = -1;
+  }
+  subgraph_nodes->clear();
+}
+
+/*!
+ * \brief This function traverses the nodes in a computation graph from a 
starting
+ * node following the input edges and output edges, and marks all nodes that
+ * can be accessed from the starting node. Before the function returns,
+ * it will conduct checking whether there is a loop between the potential 
subgraph
+ * and the outside nodes. If so, add the node that should 

[GitHub] reminisce commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-28 Thread GitBox
reminisce commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r213502884
 
 

 ##
 File path: src/operator/subgraph/subgraph_property.h
 ##
 @@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef MXNET_OPERATOR_SUBGRAPH_SUBGRAPH_PROPERTY_H_
+#define MXNET_OPERATOR_SUBGRAPH_SUBGRAPH_PROPERTY_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace mxnet {
+namespace op {
+
+/*
+ * This provides criteria for selecting nodes in a subgraph.
+ * When a node is passed to this object, the selection criteria may be changed.
+ * We can also specify what links we should use when traversing the neighbor
+ * nodes.
+ */
+class SubgraphSelector {
+ public:
+  virtual ~SubgraphSelector() {}
+  // Determine if the node should be selected for a subgraph.
+  virtual bool Select(const nnvm::Node &n) = 0;
+  // Determine if the input node should be selected for a subgraph.
+  virtual bool SelectInput(const nnvm::Node &n, const nnvm::Node &new_node) = 0;
 
 Review comment:
   Done.
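
   The selection interface quoted above can be sketched in Python; 
`OpNameSelector` is a hypothetical example criterion for illustration, not part 
of the PR, and the `Op` record stands in for `nnvm::Node`:

```python
from collections import namedtuple

class SubgraphSelector:
    # Decide, node by node, whether to start a subgraph at a node and
    # whether to grow it along the node's input edges.
    def select(self, node):
        raise NotImplementedError
    def select_input(self, node, input_node):
        raise NotImplementedError

class OpNameSelector(SubgraphSelector):
    # Hypothetical criterion: only group whitelisted operator names.
    def __init__(self, op_names):
        self.op_names = set(op_names)
    def select(self, node):
        return node.op in self.op_names
    def select_input(self, node, input_node):
        return input_node.op in self.op_names

Op = namedtuple("Op", "op")
sel = OpNameSelector({"Convolution", "Activation"})
print(sel.select(Op("Convolution")))  # True
print(sel.select(Op("topk")))         # False
```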




[GitHub] reminisce commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-28 Thread GitBox
reminisce commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r213502853
 
 

 ##
 File path: src/operator/subgraph/common.h
 ##
 @@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef MXNET_OPERATOR_SUBGRAPH_COMMON_H_
+#define MXNET_OPERATOR_SUBGRAPH_COMMON_H_
+
+#include 
+#include 
+#include 
+#include "../elemwise_op_common.h"
+#include "../../executor/exec_pass.h"
+
+namespace mxnet {
+namespace op {
+
+inline uint32_t DefaultSubgraphOpNumInputs(const nnvm::NodeAttrs& attrs) {
+  const nnvm::Symbol& sym = *attrs.subgraphs[0];
+  return sym.ListInputNames(nnvm::Symbol::kAll).size();
+}
+
+inline uint32_t DefaultSubgraphOpNumOutputs(const nnvm::NodeAttrs& attrs) {
+  const nnvm::Symbol& sym = *attrs.subgraphs[0];
+  return sym.ListOutputNames().size();
+}
+
+inline std::vector<std::string> DefaultSubgraphOpListInputs(const nnvm::NodeAttrs& attrs) {
+  const nnvm::Symbol& sym = *attrs.subgraphs[0];
+  return sym.ListInputNames(nnvm::Symbol::kAll);
+}
+
+inline std::vector<std::string> DefaultSubgraphOpListOutputs(const nnvm::NodeAttrs& attrs) {
+  const nnvm::Symbol& sym = *attrs.subgraphs[0];
+  return sym.ListOutputNames();
+}
+
+inline bool DefaultSubgraphOpShape(const nnvm::NodeAttrs& attrs,
+                                   std::vector<TShape> *in_shapes,
+                                   std::vector<TShape> *out_shapes) {
+  using namespace exec;
+  const nnvm::Symbol& subgraph_sym = *attrs.subgraphs[0];
+  nnvm::Graph g;
+  g.outputs = subgraph_sym.outputs;
+  const auto& idx_g = g.indexed_graph();
+  CHECK_EQ(idx_g.input_nodes().size(), in_shapes->size());
+  CHECK_EQ(idx_g.outputs().size(), out_shapes->size());
+
+  // Put the input and output shapes to the shape vector.
+  nnvm::ShapeVector shapes(idx_g.num_node_entries());
+  const auto& input_nids = idx_g.input_nodes();
+  CHECK_EQ(input_nids.size(), in_shapes->size());
+  for (size_t i = 0; i < in_shapes->size(); i++) {
+auto eid = idx_g.entry_id(input_nids[i], 0);
+shapes[eid] = in_shapes->at(i);
+  }
+  CHECK_EQ(g.outputs.size(), out_shapes->size());
+  for (size_t i = 0; i < out_shapes->size(); i++) {
+auto eid = idx_g.entry_id(g.outputs[i]);
+shapes[eid] = out_shapes->at(i);
+  }
+
+  // Infer shape of the graph.
+  g.attrs["shape"] = std::make_shared<dmlc::any>(std::move(shapes));
+  g = exec::InferShape(std::move(g));
+
+  // Copy the inferred shape back to the input shapes and the output shapes.
+  shapes = g.GetAttr<nnvm::ShapeVector>("shape");
+  // assign to in_shapes
+  for (size_t i = 0; i < in_shapes->size(); ++i) {
+const auto eid = idx_g.entry_id(input_nids[i], 0);
+SHAPE_ASSIGN_CHECK(*in_shapes, i, shapes[eid]);
+  }
+  // assign to out_shapes
+  for (size_t i = 0; i < g.outputs.size(); ++i) {
+const auto eid = idx_g.entry_id(g.outputs[i]);
+SHAPE_ASSIGN_CHECK(*out_shapes, i, shapes[eid]);
+  }
+  // Check if we have inferred the shapes correctly.
+  return g.GetAttr<size_t>("shape_num_unknown_nodes") == 0;
+}
+
+inline bool DefaultSubgraphOpType(const nnvm::NodeAttrs& attrs,
+                                  std::vector<int> *in_types,
+                                  std::vector<int> *out_types) {
+  const nnvm::Symbol& subgraph_sym = *attrs.subgraphs[0];
+  nnvm::Graph g;
+  g.outputs = subgraph_sym.outputs;
+  const auto& idx_g = g.indexed_graph();
+  CHECK_EQ(idx_g.input_nodes().size(), in_types->size());
+  CHECK_EQ(idx_g.outputs().size(), out_types->size());
+
+  // Put the input and output data types to the dtype vector.
+  nnvm::DTypeVector types(idx_g.num_node_entries(), -1);
+  const auto _nids = idx_g.input_nodes();
+  CHECK_EQ(input_nids.size(), in_types->size());
+  for (size_t i = 0; i < in_types->size(); i++) {
+auto eid = idx_g.entry_id(input_nids[i], 0);
+types[eid] = in_types->at(i);
+  }
+  CHECK_EQ(g.outputs.size(), out_types->size());
+  for (size_t i = 0; i < out_types->size(); i++) {
+auto eid = idx_g.entry_id(g.outputs[i]);
+types[eid] = out_types->at(i);
+  }
+
+  // Infer data type of the graph.
+  g.attrs["dtype"] = std::make_shared(std::move(types));
+  g = exec::InferType(std::move(g));
+
+  types = g.GetAttr("dtype");
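The delegation pattern used by `DefaultSubgraphOpShape` above (seed the inner graph's entries with the shapes known outside, run the inner graph's inference, copy the results back, and succeed only if nothing is left unknown) can be sketched in Python. Helper names here are hypothetical, not MXNet API:

```python
def subgraph_infer_shape(infer_inner, in_shapes, out_shapes):
    # Seed the inner graph with the shapes known outside, let the inner
    # graph infer, then copy the inferred shapes back out in place
    # (mirrors the structure of DefaultSubgraphOpShape above).
    inferred_in, inferred_out, num_unknown = infer_inner(list(in_shapes),
                                                         list(out_shapes))
    in_shapes[:] = inferred_in
    out_shapes[:] = inferred_out
    return num_unknown == 0  # success iff nothing is left unknown

# Toy "inner graph": a single 2-D transpose whose output shape is
# fully determined by its input shape.
def transpose_graph(in_shapes, out_shapes):
    (h, w), = in_shapes  # single input
    return [(h, w)], [(w, h)], 0

in_s, out_s = [(2, 3)], [None]
ok = subgraph_infer_shape(transpose_graph, in_s, out_s)
print(ok, in_s, out_s)  # True [(2, 3)] [(3, 2)]
```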

[GitHub] reminisce commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-28 Thread GitBox
reminisce commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r213502485
 
 

 ##
 File path: src/executor/graph_executor.cc
 ##
 @@ -42,6 +43,7 @@ using namespace mxnet::common;
 GraphExecutor::GraphExecutor() {
   log_verbose_ = dmlc::GetEnv("MXNET_EXEC_VERBOSE_LOGGING", false);
   need_grad_ = false;
+  subgraph_property_ = dmlc::GetEnv("MXNET_SUBGRAPH_BACKEND", std::string());
 
 Review comment:
   Right now, there is no real accelerator integrated with MXNet. Intel team is 
going to submit their work on MKLDNN using this API set. We can add this env 
var along with their PR.
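   For context, the variable is meant to select a registered subgraph backend by name at runtime. A sketch of the intended usage (the value `MKLDNN` is an assumption here; as noted above, no backend is registered yet at the time of this comment):
   ```shell
   # Hypothetical usage once a backend registers itself under this name;
   # "MKLDNN" is an assumed value, not yet supported when this was written.
   export MXNET_SUBGRAPH_BACKEND=MKLDNN
   printenv MXNET_SUBGRAPH_BACKEND   # the GraphExecutor reads this at construction
   ```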


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #12386: [MXNET-810] [WIP] Add support for more req patterns for bilinear sampler backward

2018-08-28 Thread GitBox
haojin2 commented on issue #12386: [MXNET-810] [WIP] Add support for more req 
patterns for bilinear sampler backward
URL: https://github.com/apache/incubator-mxnet/pull/12386#issuecomment-416768267
 
 
   @eric-haibin-lin @reminisce @piiswrong @anirudh2290 




[GitHub] haojin2 opened a new pull request #12386: [MXNET-810] [WIP] Add support for more req patterns for bilinear sampler backward

2018-08-28 Thread GitBox
haojin2 opened a new pull request #12386: [MXNET-810] [WIP] Add support for 
more req patterns for bilinear sampler backward
URL: https://github.com/apache/incubator-mxnet/pull/12386
 
 
   ## Description ##
   Fix #8866.
   
   ## Checklist ##
   ### Essentials ###
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Support more req type patterns for backward of bilinear sampler
   - [ ] Corresponding unit tests
   
   ## Comments ##
   Currently test_bilinear_sampler is still marked as flaky, so the unit test for this change cannot be enabled yet; the PR is blocked at the moment.
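   For reference, the forward bilinear sampling whose backward pass this PR extends can be sketched in NumPy. This is an illustrative toy, not the MXNet kernel; it follows the (x, y)-in-[-1, 1] grid convention of `mx.nd.BilinearSampler`:
   ```python
   import numpy as np

   def bilinear_sample(data, grid):
       # data: (H, W) single-channel image; grid: (2, Ho, Wo) holding
       # (x, y) sampling coordinates normalized to [-1, 1].
       H, W = data.shape
       xs = (grid[0] + 1) * (W - 1) / 2   # map [-1, 1] -> [0, W-1]
       ys = (grid[1] + 1) * (H - 1) / 2
       x0 = np.floor(xs).astype(int)
       y0 = np.floor(ys).astype(int)
       wx, wy = xs - x0, ys - y0          # fractional parts = blend weights
       x0 = np.clip(x0, 0, W - 1)
       x1 = np.clip(x0 + 1, 0, W - 1)
       y0 = np.clip(y0, 0, H - 1)
       y1 = np.clip(y0 + 1, 0, H - 1)
       top = data[y0, x0] * (1 - wx) + data[y0, x1] * wx
       bot = data[y1, x0] * (1 - wx) + data[y1, x1] * wx
       return top * (1 - wy) + bot * wy

   # An identity grid (corners at -1 and 1) reproduces the input exactly.
   data = np.array([[1.0, 2.0], [3.0, 4.0]])
   xg, yg = np.meshgrid(np.linspace(-1, 1, 2), np.linspace(-1, 1, 2))
   out = bilinear_sample(data, np.stack([xg, yg]))
   ```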




[GitHub] metrofun edited a comment on issue #2811: where can i find rbm example by mxnet?

2018-08-28 Thread GitBox
metrofun edited a comment on issue #2811: where can i find rbm example by mxnet?
URL: 
https://github.com/apache/incubator-mxnet/issues/2811#issuecomment-416764491
 
 
   The second post on RBM in MXNet was low on my priority, but here is the code 
I wrote for the second post
   
   https://gist.github.com/metrofun/5c76dc280b8ce19e56d673e19dc262c4 




[GitHub] metrofun commented on issue #2811: where can i find rbm example by mxnet?

2018-08-28 Thread GitBox
metrofun commented on issue #2811: where can i find rbm example by mxnet?
URL: 
https://github.com/apache/incubator-mxnet/issues/2811#issuecomment-416764491
 
 
   Second post on RBM in MXNet was low on my priority, but here is the code I 
wrote for the second post
   
   https://gist.github.com/metrofun/5c76dc280b8ce19e56d673e19dc262c4 




[GitHub] vishaalkapoor commented on issue #2811: where can i find rbm example by mxnet?

2018-08-28 Thread GitBox
vishaalkapoor commented on issue #2811: where can i find rbm example by mxnet?
URL: 
https://github.com/apache/incubator-mxnet/issues/2811#issuecomment-416758822
 
 
   @yzhliu Would you be able to close? Thank you!




[GitHub] terrytangyuan commented on issue #12162: Edit shape.array doc and some style improvements

2018-08-28 Thread GitBox
terrytangyuan commented on issue #12162: Edit shape.array doc and some style 
improvements
URL: https://github.com/apache/incubator-mxnet/pull/12162#issuecomment-416758686
 
 
   I believe that should be done in a separate PR. I doubt that `formatR` gives 
the most readable code, e.g. I've rarely seen the following style but it 
appears to be on the PR you referred to:
   ```
   convolution_module <- function(net, kernel_size, pad_size, filter_count, stride = c(1, 
     1), work_space = 2048, batch_norm = TRUE, down_pool = FALSE, up_pool = FALSE, 
   or
   ```
   if (Sys.getenv("R_GPU_ENABLE") != "" & as.integer(Sys.getenv("R_GPU_ENABLE")) == 
     1) {
   ```




[GitHub] vandanavk edited a comment on issue #12291: [MXNET-817] Fixes to speech recognition example

2018-08-28 Thread GitBox
vandanavk edited a comment on issue #12291: [MXNET-817] Fixes to speech 
recognition example
URL: https://github.com/apache/incubator-mxnet/pull/12291#issuecomment-416704408
 
 
   @anirudhacharya Opened an issue 
https://github.com/apache/incubator-mxnet/issues/12384 for tracking the refactor


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #12379: Revert "Revert "Disable kvstore test (#11798)" (#12279)"

2018-08-28 Thread GitBox
marcoabreu commented on issue #12379: Revert "Revert "Disable kvstore test 
(#11798)" (#12279)"
URL: https://github.com/apache/incubator-mxnet/pull/12379#issuecomment-416752760
 
 
   I can only merge if this PR passes CI




[GitHub] Roshrini commented on issue #10948: get stuck with subprocess in multithread

2018-08-28 Thread GitBox
Roshrini commented on issue #10948: get stuck with subprocess in multithread
URL: 
https://github.com/apache/incubator-mxnet/issues/10948#issuecomment-416740256
 
 
   @HorsonLiu MXNet is not very thread-safe and so this is not supported yet. 
Adding it as a FeatureRequest.
   @sandeep-krishnamurthy Can you please tag this as  [FeatureRequest, Backend]




[GitHub] vandanavk edited a comment on issue #12182: [MXNET-698] Remove Epoch training metric log

2018-08-28 Thread GitBox
vandanavk edited a comment on issue #12182: [MXNET-698] Remove Epoch training 
metric log
URL: https://github.com/apache/incubator-mxnet/pull/12182#issuecomment-416634452
 
 
   @nswamy The following is the current behavior of the code. We need to 
reflect this in the "train-accuracy" log in base_module.py
   
   With Speedometer's `auto_reset=True`
   
   ```
   INFO:root:Epoch[0] Batch [1-100]  Speed: 45690.13 samples/sec 
accuracy=0.772123
   INFO:root:Epoch[0] Batch [101-200]  Speed: 50611.24 samples/sec 
accuracy=0.898594
   .
   .
   .
   INFO:root:Epoch[0] Batch [801-900]  Speed: 52047.39 samples/sec 
accuracy=0.950625
   INFO:root:Epoch[0] Batch [901-938] Train-accuracy=0.944679
   INFO:root:Epoch[0] Time cost=1.250
   INFO:root:Epoch[0] Validation-accuracy=0.953125
   ```
   
   With `auto_reset=False`
   
   ```
   INFO:root:Epoch[0] Batch [1-100] Speed: 16628.82 samples/sec 
accuracy=0.759746
   INFO:root:Epoch[0] Batch [1-200] Speed: 40806.39 samples/sec 
accuracy=0.828980
   .
   .
   .
   INFO:root:Epoch[0] Batch [1-900] Speed: 42613.81 samples/sec 
accuracy=0.911470
   INFO:root:Epoch[0] Batch [1-938] Train-accuracy=0.912830
   INFO:root:Epoch[0] Time cost=1.811
   INFO:root:Epoch[0] Validation-accuracy=0.956509
   ```
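   The difference between the two excerpts can be reproduced with a toy metric loop. This is a hypothetical minimal stand-in for Speedometer and `mx.metric`, not MXNet code: with `auto_reset=True` each logged value covers only the last interval, while with `auto_reset=False` it covers all batches so far.
   ```python
   class RunningAccuracy:
       # Hypothetical minimal metric, standing in for mx.metric.Accuracy.
       def __init__(self):
           self.correct, self.total = 0, 0
       def update(self, correct, n):
           self.correct += correct
           self.total += n
       def get(self):
           return self.correct / self.total
       def reset(self):
           self.correct, self.total = 0, 0

   def speedometer_logs(batches, interval, auto_reset):
       # Log the metric every `interval` batches, optionally resetting it
       # after each log line (this is what produces the two log shapes above).
       metric, logged = RunningAccuracy(), []
       for i, (correct, n) in enumerate(batches, 1):
           metric.update(correct, n)
           if i % interval == 0:
               logged.append(metric.get())
               if auto_reset:
                   metric.reset()
       return logged

   batches = [(1, 2), (1, 2), (2, 2), (2, 2)]
   print(speedometer_logs(batches, 2, True))   # [0.5, 1.0]  interval accuracy
   print(speedometer_logs(batches, 2, False))  # [0.5, 0.75] running accuracy
   ```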




[GitHub] haojin2 closed pull request #12306: SoftMin Operator

2018-08-28 Thread GitBox
haojin2 closed pull request #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/contrib/ctc_loss-inl.h 
b/src/operator/contrib/ctc_loss-inl.h
index 72209ae286c..9380be47451 100644
--- a/src/operator/contrib/ctc_loss-inl.h
+++ b/src/operator/contrib/ctc_loss-inl.h
@@ -409,7 +409,8 @@ class CTCLossOp : public Operator {
 
     // since the input is activation before softmax and cudnn ctc takes softmax
     // apply softmax to inputs first.
-    mxnet_op::Softmax<mxnet_op::softmax_fwd>(s, data.dptr_, prob.dptr_, data.shape_, 2, 1.0);
+    mxnet_op::Softmax<mxnet_op::softmax_fwd, false>(
+      s, data.dptr_, prob.dptr_, data.shape_, 2, 1.0);
 
     CUDNN_CALL(cudnnCTCLoss(s->dnn_handle_,
                             prob_desc_,
@@ -426,8 +427,8 @@ class CTCLossOp : public Operator {
                             workspace_bytes));
 
     if (req_grad) {
-      mxnet_op::SoftmaxGrad<mshadow_op::mul, mxnet_op::softmax_bwd>(s,
-          prob.dptr_, grad.dptr_, grad.dptr_, data.shape_, 2, 1.0);
+      mxnet_op::SoftmaxGrad<mshadow_op::mul, mxnet_op::softmax_bwd, false>(
+        s, prob.dptr_, grad.dptr_, grad.dptr_, data.shape_, 2, 1.0);
       Assign(grad, mxnet::kWriteInplace, grad * alphabet_size);
     }
   }
diff --git a/src/operator/nn/softmax-inl.h b/src/operator/nn/softmax-inl.h
index 4a19db7c36b..c063e385f63 100644
--- a/src/operator/nn/softmax-inl.h
+++ b/src/operator/nn/softmax-inl.h
@@ -51,7 +51,7 @@ struct log_softmax_fwd {
 };
 
 
-template<typename OP, typename DType, int ndim>
+template<typename OP, bool negate, typename DType, int ndim>
 inline void Softmax(Stream<cpu> *s, DType *in, DType *out,
                     Shape<ndim> shape, int axis, const DType temperature) {
   index_t M = shape[axis];
@@ -65,30 +65,37 @@ inline void Softmax(Stream<cpu> *s, DType *in, DType *out,
   for (int i = 0; i < static_cast<int>(N); ++i) {
     index_t base = unravel_dot(i, sshape, stride);
 
-    DType mmax = in[base];
+    DType mmax = negate ? -in[base] : in[base];
+    DType val;
     for (index_t j = 1; j < M; ++j) {
-      if (mmax < in[base + j*sa]) mmax = in[base + j*sa];
+      val = negate ? -in[base + j*sa] : in[base + j*sa];
+      if (mmax < val) mmax = val;
     }
 
     DType sum = DType(0);
+    DType in_val;
     // By default temperature is 1.0, and only in reinforcement training
     // users would set it to other values.
     // Adding a branch here to save the CPU 'divide-by-1' computation at runtime
     if (temperature == 1.0) {
       for (index_t j = 0; j < M; ++j) {
-        sum += std::exp(in[base + j*sa] - mmax);
+        in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+        sum += std::exp(in_val - mmax);
       }
 
       for (index_t j = 0; j < M; ++j) {
-        out[base + j*sa] = OP::Map(in[base + j*sa] - mmax, sum);
+        in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+        out[base + j*sa] = OP::Map(in_val - mmax, sum);
       }
     } else {
       for (index_t j = 0; j < M; ++j) {
-        sum += std::exp((in[base + j*sa] - mmax)/temperature);
+        in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+        sum += std::exp((in_val - mmax)/temperature);
       }
 
       for (index_t j = 0; j < M; ++j) {
-        out[base + j*sa] = OP::Map((in[base + j*sa] - mmax)/temperature, sum);
+        in_val = negate ? -in[base + j*sa] : in[base + j*sa];
+        out[base + j*sa] = OP::Map((in_val - mmax)/temperature, sum);
       }
     }
   }
@@ -111,7 +118,7 @@ struct log_softmax_bwd {
 };
 
 
-template<typename OP1, typename OP2, typename DType, int ndim>
+template<typename OP1, typename OP2, bool negate, typename DType, int ndim>
 inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
                         DType *igrad, Shape<ndim> shape, int axis,
                         const DType temperature) {
@@ -137,12 +144,16 @@ inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
     DType final_result;
     if (temperature == 1.0) {
       for (index_t j = 0; j < M; ++j) {
-        final_result = OP2::Map(ograd[base + j*sa], out[base + j*sa], sum);
+        final_result = negate ?
+                       -OP2::Map(ograd[base + j*sa], out[base + j*sa], sum) :
+                       OP2::Map(ograd[base + j*sa], out[base + j*sa], sum);
         KERNEL_ASSIGN(igrad[base + j*sa], Req, final_result);
       }
     } else {
       for (index_t j = 0; j < M; ++j) {
-        final_result = OP2::Map(ograd[base + j*sa], out[base + j*sa], sum) / temperature;
+        final_result = negate ?
+                       -OP2::Map(ograd[base + j*sa], out[base + j*sa], sum) / temperature :
+                       OP2::Map(ograd[base + j*sa], out[base + j*sa], sum) / temperature;
         KERNEL_ASSIGN(igrad[base + j*sa], Req, final_result);
       }
     }
@@ -151,7 +162,7 @@ inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
 
 
 #ifdef __CUDACC__
-template<int x_bits, typename OP, typename DType, int ndim>
+template<int x_bits, typename OP, bool negate, typename DType, int ndim>
 __global__ void softmax_compute_kernel(DType *in, DType *out, index_t M, int axis,

[GitHub] haojin2 opened a new pull request #12306: SoftMin Operator

2018-08-28 Thread GitBox
haojin2 opened a new pull request #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306
 
 
   ## Description ##
   Support softmin function: 
https://pytorch.org/docs/master/_modules/torch/nn/functional.html#softmin
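   The identity the operator is built on, softmin(x) = softmax(-x), can be sanity-checked with a small NumPy sketch (illustrative only, independent of the MXNet kernels):
   ```python
   import numpy as np

   def softmin(x, axis=-1, temperature=1.0):
       # softmin(x) = softmax(-x); subtract the max for numerical stability.
       z = -x / temperature
       z = z - z.max(axis=axis, keepdims=True)
       e = np.exp(z)
       return e / e.sum(axis=axis, keepdims=True)

   x = np.array([[1.0, 2.0, 3.0]])
   print(softmin(x))  # the smallest input gets the largest probability
   ```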
   
   ## Checklist ##
   ### Essentials ###
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Softmin operator
   - [x] Corresponding unit tests
   
   ## Comments ##
   Passed more than 1 trial on both CPU & GPU:
   ```
   MXNET_TEST_COUNT=1 nosetests -s --verbose 
tests/python/unittest/test_operator.py:test_softmin
   [INFO] Setting module np/mx/python random seeds, use 
MXNET_MODULE_SEED=53121875 to reproduce.
   test_operator.test_softmin ... ok
   
   --
   Ran 1 test in 670.955s
   
   OK
   ```
   ```
   MXNET_TEST_COUNT=1 nosetests -s --verbose 
tests/python/gpu/test_operator_gpu.py:test_softmin
   [INFO] Setting module np/mx/python random seeds, use 
MXNET_MODULE_SEED=207344053 to reproduce.
   test_operator_gpu.test_softmin ... ok
   
   --
   Ran 1 test in 946.723s
   
   OK
   ```




[GitHub] haojin2 commented on issue #12306: SoftMin Operator

2018-08-28 Thread GitBox
haojin2 commented on issue #12306: SoftMin Operator
URL: https://github.com/apache/incubator-mxnet/pull/12306#issuecomment-416730180
 
 
   @szha 




[GitHub] anirudhacharya commented on issue #12162: Edit shape.array doc and some style improvements

2018-08-28 Thread GitBox
anirudhacharya commented on issue #12162: Edit shape.array doc and some style 
improvements
URL: https://github.com/apache/incubator-mxnet/pull/12162#issuecomment-416723075
 
 
   @terrytangyuan Because we are in the process of establishing uniform standards for the R package and will eventually have lintr run in the CI pipeline. For example - 
https://github.com/apache/incubator-mxnet/pull/12360




[GitHub] terrytangyuan commented on issue #12162: Edit shape.array doc and some style improvements

2018-08-28 Thread GitBox
terrytangyuan commented on issue #12162: Edit shape.array doc and some style 
improvements
URL: https://github.com/apache/incubator-mxnet/pull/12162#issuecomment-416712675
 
 
   What's the difference? I am just making some enhancements manually for this particular file, and all the changes here are valid. Others should feel free to use formatR themselves to enhance the overall package. 




[GitHub] mseth10 opened a new pull request #12385: fixed flaky test issue for test_operator_gpu.test_convolution_grouping

2018-08-28 Thread GitBox
mseth10 opened a new pull request #12385: fixed flaky test issue for 
test_operator_gpu.test_convolution_grouping
URL: https://github.com/apache/incubator-mxnet/pull/12385
 
 
   ## Description ##
   Issue not reproducible on Ubuntu. The tolerance parameter (atol) has been relaxed. This should fix the flaky test on Windows (#12219), since the mismatch percentage is small.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x]  Tolerance parameter atol modified to 1e-3
   
   ## Comments ##
   - Passed more than 10,000 times on GPU
   - @haojin2
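   The check behind the fix follows NumPy's allclose semantics: pass iff |a - b| <= atol + rtol * |b|, so a larger atol absorbs the small absolute mismatches that come from float32 accumulation-order differences. A quick illustration:
   ```python
   import numpy as np

   # allclose-style comparison: pass iff |a - b| <= atol + rtol * |b|.
   a, b = np.float32(1.0005), np.float32(1.0)
   print(np.allclose(a, b, rtol=1e-5, atol=1e-5))  # False: tight atol rejects
   print(np.allclose(a, b, rtol=1e-5, atol=1e-3))  # True: relaxed atol passes
   ```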
   




[GitHub] marcoabreu closed pull request #12381: A solution to prevent zombie containers locally and in CI

2018-08-28 Thread GitBox
marcoabreu closed pull request #12381: A solution to prevent zombie containers 
locally and in CI
URL: https://github.com/apache/incubator-mxnet/pull/12381
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/ci/README.md b/ci/README.md
index 548e9cb9b04..69308756943 100644
--- a/ci/README.md
+++ b/ci/README.md
@@ -59,6 +59,20 @@ To work inside a container with a shell you can do:
 When building, the artifacts are located in the build/ directory in the project root. In case
 `build.py -a` is invoked, the artifacts are located in build.<platform>/
 
+# Docker container cleanup (Zombie containers)
+Docker has a client-server architecture, so when the program that is executing the docker client
+dies or receives a signal, the container keeps running as it's started by the docker daemon.
+We implement signal handlers that catch sigterm and sigint and clean up containers before exit. In
+Jenkins there's not enough time between sigterm and sigkill, so we guarantee that containers are not
+left running by propagating environment variables used by the Jenkins process tree killer to
+identify which process to kill when the job is stopped. This has the effect of stopping the
+container, given that the process inside the container is terminated.
+
+How to test this is working properly: on the console you can hit ^C while a container is running
+(not just building) and see that the container is stopped by running `docker ps` on another
+terminal. In Jenkins this has been tested by stopping the job which has containers running and
+verifying that the container stops shortly afterwards by running `docker ps`.
+
 ## Add a platform
 
 To add a platform, you should add the appropriate dockerfile in
diff --git a/ci/build.py b/ci/build.py
index f1a5e99e2d0..df9e97bdb5f 100755
--- a/ci/build.py
+++ b/ci/build.py
@@ -23,26 +23,67 @@
 """
 
 __author__ = 'Marco de Abreu, Kellen Sunderland, Anton Chernov, Pedro Larroy'
-__version__ = '0.2'
+__version__ = '0.3'
 
 import argparse
 import glob
 import logging
+import os
 import re
 import shutil
 import subprocess
 import sys
 import tempfile
-from copy import deepcopy
 from itertools import chain
-from subprocess import call, check_call, check_output
+from subprocess import check_call, check_output
 from typing import *
 from util import *
+import docker
+import docker.models
+import docker.errors
+import signal
+import atexit
 import pprint
-import requests
 
 
-CCACHE_MAXSIZE = '500G'
+class Cleanup:
+    """A class to cleanup containers"""
+    def __init__(self):
+        self.containers = set()
+        self.docker_stop_timeout = 3
+
+    def add_container(self, container: docker.models.containers.Container):
+        assert isinstance(container, docker.models.containers.Container)
+        self.containers.add(container)
+
+    def remove_container(self, container: docker.models.containers.Container):
+        assert isinstance(container, docker.models.containers.Container)
+        self.containers.remove(container)
+
+    def _cleanup_containers(self):
+        if self.containers:
+            logging.warning("Cleaning up containers")
+        else:
+            return
+        # noinspection PyBroadException
+        try:
+            stop_timeout = int(os.environ.get("DOCKER_STOP_TIMEOUT", self.docker_stop_timeout))
+        except Exception:
+            stop_timeout = 3
+        for container in self.containers:
+            try:
+                container.stop(timeout=stop_timeout)
+                logging.info("☠: stopped container %s", trim_container_id(container.id))
+                container.remove()
+                logging.info(": removed container %s", trim_container_id(container.id))
+            except Exception as e:
+                logging.exception(e)
+        self.containers.clear()
+        logging.info("Cleaning up containers finished.")
+
+    def __call__(self):
+        """Perform cleanup"""
+        self._cleanup_containers()
 
 
 def get_dockerfiles_path():
@@ -115,7 +156,10 @@ def build_docker(platform: str, docker_binary: str, registry: str, num_retries:
     run_cmd()
     # Get image id by reading the tag. It's guaranteed (except race condition) that the tag exists. Otherwise, the
     # check_call would have failed
-    return _get_local_image_id(docker_binary=docker_binary, docker_tag=tag)
+    image_id = _get_local_image_id(docker_binary=docker_binary, docker_tag=tag)
+    if not image_id:
+        raise FileNotFoundError('Unable to find docker image id matching with {}'.format(tag))
+    return image_id
 
 
 def _get_local_image_id(docker_binary, docker_tag):
@@ -137,10 +181,11 @@ def buildir() -> str:
 
 
 def default_ccache_dir() -> str:
+    """:return: ccache directory for the current platform"""
     # Share ccache across containers
     if 'CCACHE_DIR' in 

[incubator-mxnet] branch master updated: A solution to prevent zombie containers locally and in CI (#12381)

2018-08-28 Thread marcoabreu
This is an automated email from the ASF dual-hosted git repository.

marcoabreu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new e2a3eef  A solution to prevent zombie containers locally and in CI 
(#12381)
e2a3eef is described below

commit e2a3eef349cb6643c08a7840d8cbd43b38fedfd5
Author: Pedro Larroy <928489+lar...@users.noreply.github.com>
AuthorDate: Tue Aug 28 21:16:31 2018 +0200

A solution to prevent zombie containers locally and in CI (#12381)

Fix pylint, mypy, and pycharm code inspection warnings
---
 ci/README.md |  14 +++
 ci/build.py  | 304 ---
 2 files changed, 243 insertions(+), 75 deletions(-)

diff --git a/ci/README.md b/ci/README.md
index 548e9cb..6930875 100644
--- a/ci/README.md
+++ b/ci/README.md
@@ -59,6 +59,20 @@ To work inside a container with a shell you can do:
 When building, the artifacts are located in the build/ directory in the project root. In case
 `build.py -a` is invoked, the artifacts are located in build.<platform>/
 
+# Docker container cleanup (Zombie containers)
+Docker has a client-server architecture, so when the program that is executing the docker client
+dies or receives a signal, the container keeps running as it's started by the docker daemon.
+We implement signal handlers that catch sigterm and sigint and clean up containers before exit. In
+Jenkins there's not enough time between sigterm and sigkill, so we guarantee that containers are not
+left running by propagating environment variables used by the Jenkins process tree killer to
+identify which process to kill when the job is stopped. This has the effect of stopping the
+container, given that the process inside the container is terminated.
+
+How to test this is working properly: on the console you can hit ^C while a container is running
+(not just building) and see that the container is stopped by running `docker ps` on another
+terminal. In Jenkins this has been tested by stopping the job which has containers running and
+verifying that the container stops shortly afterwards by running `docker ps`.
+
 ## Add a platform
 
 To add a platform, you should add the appropriate dockerfile in
diff --git a/ci/build.py b/ci/build.py
index f1a5e99..df9e97b 100755
--- a/ci/build.py
+++ b/ci/build.py
@@ -23,26 +23,67 @@
 """
 
 __author__ = 'Marco de Abreu, Kellen Sunderland, Anton Chernov, Pedro Larroy'
-__version__ = '0.2'
+__version__ = '0.3'
 
 import argparse
 import glob
 import logging
+import os
 import re
 import shutil
 import subprocess
 import sys
 import tempfile
-from copy import deepcopy
 from itertools import chain
-from subprocess import call, check_call, check_output
+from subprocess import check_call, check_output
 from typing import *
 from util import *
+import docker
+import docker.models
+import docker.errors
+import signal
+import atexit
 import pprint
-import requests
 
 
-CCACHE_MAXSIZE = '500G'
+class Cleanup:
+    """A class to cleanup containers"""
+    def __init__(self):
+        self.containers = set()
+        self.docker_stop_timeout = 3
+
+    def add_container(self, container: docker.models.containers.Container):
+        assert isinstance(container, docker.models.containers.Container)
+        self.containers.add(container)
+
+    def remove_container(self, container: docker.models.containers.Container):
+        assert isinstance(container, docker.models.containers.Container)
+        self.containers.remove(container)
+
+    def _cleanup_containers(self):
+        if self.containers:
+            logging.warning("Cleaning up containers")
+        else:
+            return
+        # noinspection PyBroadException
+        try:
+            stop_timeout = int(os.environ.get("DOCKER_STOP_TIMEOUT", self.docker_stop_timeout))
+        except Exception:
+            stop_timeout = 3
+        for container in self.containers:
+            try:
+                container.stop(timeout=stop_timeout)
+                logging.info("☠: stopped container %s", trim_container_id(container.id))
+                container.remove()
+                logging.info(": removed container %s", trim_container_id(container.id))
+            except Exception as e:
+                logging.exception(e)
+        self.containers.clear()
+        logging.info("Cleaning up containers finished.")
+
+    def __call__(self):
+        """Perform cleanup"""
+        self._cleanup_containers()
 
 
 def get_dockerfiles_path():
@@ -115,7 +156,10 @@ def build_docker(platform: str, docker_binary: str, registry: str, num_retries:
     run_cmd()
     # Get image id by reading the tag. It's guaranteed (except race condition) that the tag exists. Otherwise, the
     # check_call would have failed
-    return _get_local_image_id(docker_binary=docker_binary, docker_tag=tag)
+    image_id = 
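The diff above introduces a `Cleanup` registry of running containers plus new `atexit` and `signal` imports, which suggests the cleanup is meant to fire on interpreter exit and on termination signals. A minimal stdlib sketch of that pattern, with a hypothetical `DummyContainer` standing in for the docker SDK's container objects:

```python
import atexit
import logging
import os
import signal


class DummyContainer:
    """Hypothetical stand-in for docker.models.containers.Container."""
    def __init__(self, cid):
        self.id = cid
        self.stopped = False

    def stop(self, timeout=3):
        self.stopped = True

    def remove(self):
        pass


class Cleanup:
    """Track started containers and stop/remove them on exit."""
    def __init__(self):
        self.containers = set()
        self.docker_stop_timeout = 3

    def add_container(self, container):
        self.containers.add(container)

    def __call__(self):
        # DOCKER_STOP_TIMEOUT overrides the default grace period, as in the diff
        timeout = int(os.environ.get("DOCKER_STOP_TIMEOUT", self.docker_stop_timeout))
        for container in self.containers:
            container.stop(timeout=timeout)
            container.remove()
            logging.info("removed container %s", container.id)
        self.containers.clear()


cleanup = Cleanup()
atexit.register(cleanup)                                # normal interpreter exit
signal.signal(signal.SIGTERM, lambda s, f: cleanup())   # termination signal
```

Registering the same callable for both paths keeps the cleanup idempotent: whichever fires first clears the set, so the second invocation is a no-op.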

[GitHub] zheng-da commented on issue #12269: get memory error when running a model exported from gluon model zoo

2018-08-28 Thread GitBox
zheng-da commented on issue #12269: get memory error when running a model 
exported from gluon model zoo
URL: 
https://github.com/apache/incubator-mxnet/issues/12269#issuecomment-416707002
 
 
   Thanks. It seems mxnet should report an error instead of failing with a 
segfault.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da closed issue #12269: get memory error when running a model exported from gluon model zoo

2018-08-28 Thread GitBox
zheng-da closed issue #12269: get memory error when running a model exported 
from gluon model zoo
URL: https://github.com/apache/incubator-mxnet/issues/12269
 
 
   




[GitHub] vandanavk commented on issue #12291: [MXNET-817] Fixes to speech recognition example

2018-08-28 Thread GitBox
vandanavk commented on issue #12291: [MXNET-817] Fixes to speech recognition 
example
URL: https://github.com/apache/incubator-mxnet/pull/12291#issuecomment-416704408
 
 
   Opened an issue https://github.com/apache/incubator-mxnet/issues/12384 for 
tracking the refactor




[GitHub] vandanavk opened a new issue #12384: Refactor speech recognition example

2018-08-28 Thread GitBox
vandanavk opened a new issue #12384: Refactor speech recognition example
URL: https://github.com/apache/incubator-mxnet/issues/12384
 
 
   
   ## Description
   - Cleanup/simplify speech recognition example
   - Refactor singleton usage
   - Fix Python3 failures https://github.com/apache/incubator-mxnet/issues/11042
   
   
   Package used (Python/R/Scala/Julia):
   Python 2 and 3
   
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash:
   6a7bfe905bafe94a33eccd0cb8bc42f0d667f606
   
   
   ## Related issues
   https://github.com/apache/incubator-mxnet/issues/12024
   PR https://github.com/apache/incubator-mxnet/pull/12291




[GitHub] Roshrini commented on issue #12364: Importing PyTorch when using ONNX causes a segmentation fault

2018-08-28 Thread GitBox
Roshrini commented on issue #12364: Importing PyTorch when using ONNX causes a 
segmentation fault
URL: 
https://github.com/apache/incubator-mxnet/issues/12364#issuecomment-416702032
 
 
   @DatCorno Thanks for verifying this.
   @sandeep-krishnamurthy Can you please close this issue?




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-08-28 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new cde9c38  Bump the publish timestamp.
cde9c38 is described below

commit cde9c38fb2c6d9d6f0c6e37dfa0956e7adb1b136
Author: mxnet-ci 
AuthorDate: Tue Aug 28 18:56:03 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..30d59a1
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Aug 28 18:56:03 UTC 2018



[GitHub] anirudh2290 commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-28 Thread GitBox
anirudh2290 commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r213432161
 
 

 ##
 File path: src/operator/subgraph/partition_graph.cc
 ##
 @@ -0,0 +1,774 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2018 by Contributors
+ * \file partition_graph.cc
+ * \brief
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./subgraph_property.h"
+
+namespace nnvm {
+NodePtr CreateVariableNode(const std::string& name);
+}
+
+namespace mxnet {
+
+namespace op {
+
+using nnvm::Symbol;
+using nnvm::Node;
+using nnvm::NodePtr;
+using nnvm::NodeEntry;
+using nnvm::Graph;
+
+#define DEBUG_SUBGRAPH 0
+
+namespace sg {  // sg stands for subgraph
+
+struct SimpleNode;
+using SimpleNodePtr = std::shared_ptr<SimpleNode>;
+
+/*!
+ * \brief Node of the undirected graph which replicates the network structures
+ * of the computational graph. It is used to ease the graph traversal for finding
+ * subgraphs.
+ */
+struct SimpleNode {
+  static SimpleNodePtr Create() {
+    return std::make_shared<SimpleNode>();
+  }
+  SimpleNode() : label(-1), node(nullptr) {}
+  /*! subgraph label */
+  int label;
+  /*! the original node in the computational graph it references */
+  nnvm::Node* node;
+  /*!
+   * \brief output nodes of the current node
+   * key is node ptr and value is an array of indices standing for the entry indices
+   * in key->inputs whose source is the current node.
+   */
+  std::unordered_map<Node*, std::vector<size_t>> outputs;
+};  // struct SimpleNode
+
+#if DEBUG_SUBGRAPH
+void PrintSubgraph(const std::vector<SimpleNodePtr>& simple_nodes) {
+  std::string op_names = "";
+  for (size_t i = 0; i < simple_nodes.size(); ++i) {
+    op_names += simple_nodes[i]->node->attrs.name + ' ';
+  }
+  LOG(INFO) << "Subgraph node names: " << op_names;
+}
+
+void PrintNodeEntry(const nnvm::NodeEntry& entry) {
+  std::string ret = "NodeEntry: node_name=" + entry.node->attrs.name
+    + ", index=" + std::to_string(entry.index) + ", version=" + std::to_string(entry.version);
+  LOG(INFO) << ret;
+}
+
+void PrintNodeEntries(const std::vector<nnvm::NodeEntry*>& entries) {
+  for (size_t i = 0; i < entries.size(); ++i) {
+    PrintNodeEntry(*entries[i]);
+  }
+}
+#endif
+
+/*!
+ * \brief Given a MXNet computational graph, create an undirected graph from it.
+ * \param g the MXNet computational graph
+ * \param simple_nodes the nodes of undirected graph in top sorted order
+ */
+void CreateSimpleGraph(const Graph& g,
+                       std::vector<SimpleNodePtr>* simple_nodes) {
+  const auto& indexed_graph = g.indexed_graph();
+  simple_nodes->reserve(indexed_graph.num_nodes());
+  DFSVisit(g.outputs, [&](const NodePtr& node) {
+    SimpleNodePtr sn = SimpleNode::Create();
+    sn->node = node.get();
+    for (size_t i = 0; i < sn->node->inputs.size(); ++i) {
+      const auto& e = sn->node->inputs[i];
+      const auto input_nid = indexed_graph.node_id(e.node.get());
+      CHECK_LT(input_nid, simple_nodes->size());
+      auto& input_node_outputs = (*simple_nodes)[input_nid]->outputs;
+      auto it = input_node_outputs.find(sn->node);
+      if (it == input_node_outputs.end()) {
+        input_node_outputs.emplace(sn->node, std::vector<size_t>{i});
+      } else {
+        it->second.push_back(i);
+      }
+    }
+    simple_nodes->emplace_back(std::move(sn));
+  });
+}
+
+/*!
+ * \brief Reset labels of the subgraph nodes to the original state
+ * and clear the vector of subgraph nodes.
+ */
+void ResetNodeLabels(const nnvm::Graph& g,
+                     const std::vector<SimpleNodePtr>& simple_nodes,
+                     std::vector<nnvm::Node*>* subgraph_nodes) {
+  for (auto n : *subgraph_nodes) {
+    const auto nid = g.indexed_graph().node_id(n);
+    simple_nodes[nid]->label = -1;
+  }
+  subgraph_nodes->clear();
+}
+
+/*!
+ * \brief This function traverses the nodes in a computation graph from a starting
+ * node following the input edges and output edges, and marks all nodes that
+ * can be accessed from the starting node. Before the function returns,
+ * it will conduct checking whether there is a loop between the potential subgraph
+ * and the outside nodes. If so, add the node that 
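The quoted `CreateSimpleGraph` pass walks the computational graph in topological order and, for each producer node, records which input-entry indices of each consumer refer to it (the `SimpleNode::outputs` map). The same reverse-edge bookkeeping can be sketched in plain Python over a hypothetical dict-based graph (not the real nnvm types):

```python
from collections import defaultdict


def build_outputs_map(inputs):
    """inputs: node -> list of producer nodes, one entry per input slot.

    Returns producer -> {consumer: [input indices in consumer that read producer]},
    mirroring SimpleNode.outputs in the quoted C++.
    """
    outputs = {node: defaultdict(list) for node in inputs}
    for consumer, producers in inputs.items():
        for i, producer in enumerate(producers):
            outputs[producer][consumer].append(i)
    return outputs


# a feeds b and c; b and c both feed d
graph = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
out = build_outputs_map(graph)
```

Here `out["c"]["d"] == [1]` because `c` is the second input slot of `d`, which is exactly the index-into-`key->inputs` convention the C++ comment describes.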

[GitHub] anirudh2290 commented on a change in pull request #12157: Subgraph API for integrating accelerators with MXNet

2018-08-28 Thread GitBox
anirudh2290 commented on a change in pull request #12157: Subgraph API for 
integrating accelerators with MXNet
URL: https://github.com/apache/incubator-mxnet/pull/12157#discussion_r213432803
 
 

 ##
 File path: src/operator/subgraph/partition_graph.cc
 ##
 @@ -0,0 +1,774 @@

[GitHub] yzhliu closed issue #9745: MultiIter example should also return pad in next(self)

2018-08-28 Thread GitBox
yzhliu closed issue #9745: MultiIter example should also return pad in 
next(self)
URL: https://github.com/apache/incubator-mxnet/issues/9745
 
 
   




[GitHub] anirudhacharya commented on issue #12291: [MXNET-817] Fixes to speech recognition example

2018-08-28 Thread GitBox
anirudhacharya commented on issue #12291: [MXNET-817] Fixes to speech 
recognition example
URL: https://github.com/apache/incubator-mxnet/pull/12291#issuecomment-416700409
 
 
   LGTM for most parts. Is the singleton refactor being tracked anywhere?




[GitHub] vandanavk commented on issue #12291: [MXNET-817] Fixes to speech recognition example

2018-08-28 Thread GitBox
vandanavk commented on issue #12291: [MXNET-817] Fixes to speech recognition 
example
URL: https://github.com/apache/incubator-mxnet/pull/12291#issuecomment-416697526
 
 
   Is this PR good to go?




[GitHub] sad- commented on issue #9745: MultiIter example should also return pad in next(self)

2018-08-28 Thread GitBox
sad- commented on issue #9745: MultiIter example should also return pad in 
next(self)
URL: 
https://github.com/apache/incubator-mxnet/issues/9745#issuecomment-416692175
 
 
   @yizhi can we close this out? doc example has been fixed/made more explicit 
on how to write a custom DataIter here: 
https://mxnet.incubator.apache.org/tutorials/basic/data.html#custom-iterator
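For reference, the fix the issue asked for is that a custom iterator's `next()` report how many samples in the final batch are padding. A stdlib-only sketch of that contract, with a hypothetical `DataBatch` namedtuple standing in for `mx.io.DataBatch`:

```python
from collections import namedtuple

DataBatch = namedtuple("DataBatch", ["data", "pad"])  # hypothetical stand-in


class SimpleIter:
    """Yield fixed-size batches; the last batch is padded and reports its pad count."""
    def __init__(self, samples, batch_size):
        self.samples = samples
        self.batch_size = batch_size
        self.cursor = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.cursor >= len(self.samples):
            raise StopIteration
        batch = self.samples[self.cursor:self.cursor + self.batch_size]
        pad = self.batch_size - len(batch)
        batch = batch + [batch[-1]] * pad   # repeat the last sample as padding
        self.cursor += self.batch_size
        return DataBatch(data=batch, pad=pad)


batches = list(SimpleIter(list(range(5)), batch_size=2))
```

Consumers such as metric code can then drop the trailing `pad` samples of the final batch instead of double-counting them.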




[GitHub] StephanieYuan commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
StephanieYuan commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213422213
 
 

 ##
 File path: contrib/svrg_optimization_python/src/svrg_optimizer.py
 ##
 @@ -0,0 +1,131 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""A `SVRGOptimizer` encapsulates two optimizers to accommodate SVRG optimization logic.
+"""
+
+
+import mxnet as mx
+
+
+@mx.optimizer.register
+class AssignmentOptimizer(mx.optimizer.Optimizer):
+    def update(self, index, weight, grad, state):
+        weight[:] = grad
+
+@mx.optimizer.register
+class SVRGOptimizer(mx.optimizer.Optimizer):
+    """SVRGOptimizer is a wrapper class for two optimizers: one for accumulating full gradients and the other
 
 Review comment:
   For the previous benchmarks SVRG is used with SGD. Meanwhile I did some 
exploration of using NAG + SVRG and Adam + SVRG. I think it will be valuable to 
benchmark those optimizers with SVRG too. 
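For context, the variance-reduced gradient that any of these base optimizers (SGD, NAG, Adam) would consume is g_i = ∇f_i(w) − ∇f_i(w̃) + μ, where w̃ is the snapshot whose full-data gradient μ is recomputed every `update_freq` epochs. A toy one-dimensional sketch of that update (plain Python, not the module under review):

```python
def svrg_step(w, w_snapshot, full_grad, grad_fn, idx, lr):
    """One SVRG update on a single sampled loss term idx.

    grad_fn(w, i) returns the gradient of the i-th loss term at w;
    full_grad is the average gradient over all terms at w_snapshot.
    """
    vr_grad = grad_fn(w, idx) - grad_fn(w_snapshot, idx) + full_grad
    return w - lr * vr_grad


# least squares on targets [1, 3]: f_i(w) = 0.5 * (w - t_i)**2, so grad_i = w - t_i
targets = [1.0, 3.0]
grad_fn = lambda w, i: w - targets[i]

w_tilde = 0.0
mu = sum(grad_fn(w_tilde, i) for i in range(len(targets))) / len(targets)
w = svrg_step(w=0.0, w_snapshot=0.0, full_grad=mu, grad_fn=grad_fn, idx=0, lr=0.5)
```

Note that when `w == w_snapshot` the two per-sample terms cancel and the step uses exactly the full gradient μ, regardless of which index was sampled; that cancellation is what removes the sampling variance near the snapshot.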




[GitHub] StephanieYuan commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
StephanieYuan commented on a change in pull request #12376: [MXNET-854] SVRG 
Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213422213
 
 

 ##
 File path: contrib/svrg_optimization_python/src/svrg_optimizer.py
 ##
 
 Review comment:
   For the previous benchmarks SVRG is used with SGD, I did some exploration of 
using NAG + SVRG and Adam + SVRG. I think it will be valuable to benchmark 
those optimizers with SVRG too. 




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] 
SVRG Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213415709
 
 

 ##
 File path: contrib/svrg_optimization_python/test_svrg_train.py
 ##
 @@ -0,0 +1,84 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import mxnet as mx
+import numpy as np
+from src.svrg_module import SVRGModule
+
+
+def test_svrg_intermediate_level_api(num_epoch):
+    """Test intermediate level svrgmodule API where the training process
+    need to be explicitly defined. KVstore is not explicitly created.
+    """
+    di, mod = create_network()
+    mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
+    mod.init_params(initializer=mx.init.Uniform(0.01), allow_missing=False, force_init=False, allow_extra=False)
+    kv = mx.kv.create("local")
+    mod.init_optimizer(kvstore=kv, optimizer='sgd', optimizer_params=(('learning_rate', 0.025),))
+    metrics = mx.metric.create("mse")
+    for e in range(num_epoch):
+        metrics.reset()
+        if e % (mod.update_freq) == 0:
+            mod.update_full_grads(di)
+        di.reset()
+        for batch in di:
+            mod.forward_backward(data_batch=batch)
+            mod.update()
+            mod.update_metric(metrics, batch.label)
+        print('Epoch[%d] Time cost=%.3f', e, metrics.get())
 
 Review comment:
   Please use logging.
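A minimal sketch of the suggested change: the quoted `print('Epoch[%d] ...', e, metrics.get())` prints a tuple rather than interpolating, whereas `Logger.info` takes %-style arguments and formats them lazily:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def report_epoch(epoch, metric_name, metric_value):
    # logging interpolates %-style args lazily, unlike the print() call above
    logger.info("Epoch[%d] %s=%.3f", epoch, metric_name, metric_value)


report_epoch(0, "mse", 0.025)
```

Besides correct interpolation, this lets the example honor the module's configured log level and handlers instead of writing unconditionally to stdout.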




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] 
SVRG Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213412718
 
 

 ##
 File path: contrib/svrg_optimization_python/src/svrg_module.py
 ##
 @@ -0,0 +1,581 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""A `SVRGModule` implements the `Module` API by wrapping an auxiliary module to perform
+SVRG optimization logic.
+"""
+
+import mxnet as mx
+import time
+import logging
+from svrg_optimizer import SVRGOptimizer
+from mxnet.module import Module
+
+
+class SVRGModule(Module):
+    """SVRGModule is a module that encapsulates two Modules to accommodate the SVRG optimization technique.
+    It is functionally the same as the Module API, except it is implemented using SVRG optimization logic.
+
+    Parameters
+    ----------
+    symbol : Symbol
+    data_names : list of str
+        Defaults to `('data')` for a typical model used in image classification.
+    label_names : list of str
+        Defaults to `('softmax_label')` for a typical model used in image
+        classification.
+    logger : Logger
+        Defaults to `logging`.
+    context : Context or list of Context
+        Defaults to ``mx.cpu()``.
+    work_load_list : list of number
+        Default ``None``, indicating uniform workload.
+    fixed_param_names: list of str
+        Default ``None``, indicating no network parameters are fixed.
+    state_names : list of str
+        states are similar to data and label, but not provided by data iterator.
+        Instead they are initialized to 0 and can be set by `set_states()`.
+    group2ctxs : dict of str to context or list of context,
+                 or list of dict of str to context
+        Default is `None`. Mapping the `ctx_group` attribute to the context assignment.
+    compression_params : dict
+        Specifies type of gradient compression and additional arguments depending
+        on the type of compression being used. For example, 2bit compression requires a threshold.
+        Arguments would then be {'type':'2bit', 'threshold':0.5}
+        See mxnet.KVStore.set_gradient_compression method for more details on gradient compression.
+    update_freq: int
+        Specifies the number of times to update the full gradients to be used in the SVRG optimization. For instance,
+        update_freq = 2 will calculate the gradients over all data every two epochs.
+
+    Examples
+    --------
+    >>> # An example of declaring and using SVRGModule.
+    >>> mod = SVRGModule(symbol=lro, data_names=['data'], label_names=['lin_reg_label'], update_freq=2)
+    >>> mod.fit(di, eval_metric='mse', optimizer='sgd', optimizer_params=(('learning_rate', 0.025),),
+    >>>         num_epoch=num_epoch, kvstore='local')
+    """
+
+    def __init__(self, symbol, data_names=('data',), label_names=('softmax_label',),
+                 logger=logging, context=mx.cpu(), work_load_list=None,
+                 fixed_param_names=None, state_names=None, group2ctxs=None,
+                 compression_params=None, update_freq=None):
+        super(SVRGModule, self).__init__(symbol, data_names=data_names, label_names=label_names, logger=logger,
+                                         context=context, work_load_list=work_load_list,
+                                         fixed_param_names=fixed_param_names, state_names=state_names,
+                                         group2ctxs=group2ctxs, compression_params=compression_params)
+
+        # Type check update_frequency
+        if isinstance(update_freq, int):
+            self.update_freq = update_freq
+        else:
+            raise TypeError("update_freq must be an integer")
+
+        self._mod_aux = mx.mod.Module(symbol, data_names, label_names, logger, context, work_load_list,
+                                      fixed_param_names, state_names, group2ctxs, compression_params)
+
+        self._param_dict = [{} for ctx in self._context]
+
+    def _reset_bind(self):
+        """Internal function to reset bound state."""
+        super(SVRGModule, self)._reset_bind()
+        self._mod_aux._reset_bind()
+
+    def reshape(self, 
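The `update_freq` semantics described in the docstring (recompute the full-data gradient every N epochs, reuse it in between) reduce to a modulus gate in the training loop. A schematic sketch, with hypothetical callbacks standing in for `mod.update_full_grads` and the per-epoch training pass:

```python
def run_training(num_epoch, update_freq, recompute_full_grads, train_one_epoch):
    """Schematic SVRG training loop: refresh full gradients every update_freq epochs."""
    for e in range(num_epoch):
        if e % update_freq == 0:
            recompute_full_grads(e)   # corresponds to mod.update_full_grads(di)
        train_one_epoch(e)            # corresponds to the inner batch loop


refresh_log, train_log = [], []
run_training(5, 2, refresh_log.append, train_log.append)
```

With `update_freq=2` the full gradient is refreshed at epochs 0, 2, and 4, while every epoch still runs its normal mini-batch pass against the most recent snapshot.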

[GitHub] sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] 
SVRG Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213406997
 
 

 ##
 File path: contrib/svrg_optimization_python/src/svrg_module.py
 ##
 @@ -0,0 +1,581 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""A `SVRGModule` implements the `Module` API by wrapping an auxiliary module 
to perform
+SVRG optimization logic.
+"""
+
+import mxnet as mx
+import time
+import logging
+from svrg_optimizer import SVRGOptimizer
+from mxnet.module import Module
+
+
+class SVRGModule(Module):
+    """SVRGModule is a module that encapsulates two Modules to accommodate the SVRG optimization technique.
+    It is functionally the same as Module API, except it is implemented using SVRG optimization logic.
+
+    Parameters
+    ----------
+    symbol : Symbol
+    data_names : list of str
+        Defaults to `('data')` for a typical model used in image classification.
+    label_names : list of str
+        Defaults to `('softmax_label')` for a typical model used in image
+        classification.
+    logger : Logger
+        Defaults to `logging`.
+    context : Context or list of Context
+        Defaults to ``mx.cpu()``.
+    work_load_list : list of number
+        Default ``None``, indicating uniform workload.
+    fixed_param_names: list of str
+        Default ``None``, indicating no network parameters are fixed.
+    state_names : list of str
+        states are similar to data and label, but not provided by data iterator.
+        Instead they are initialized to 0 and can be set by `set_states()`.
+    group2ctxs : dict of str to context or list of context,
+                 or list of dict of str to context
+        Default is `None`. Mapping the `ctx_group` attribute to the context assignment.
+    compression_params : dict
+        Specifies type of gradient compression and additional arguments depending
+        on the type of compression being used. For example, 2bit compression requires a threshold.
+        Arguments would then be {'type':'2bit', 'threshold':0.5}
+        See mxnet.KVStore.set_gradient_compression method for more details on gradient compression.
+    update_freq: int
+        Specifies the number of times to update the full gradients to be used in the SVRG optimization. For instance,
+        update_freq = 2 will calculates the gradients over all data every two epochs
+
+    Examples
+    --------
+    >>> # An example of declaring and using SVRGModule.
+    >>> mod = mod = SVRGModule(symbol=lro, data_names=['data'], label_names=['lin_reg_label'], update_freq=2)
+    >>> mod.fit(di, eval_metric='mse', optimizer='sgd', optimizer_params=(('learning_rate', 0.025),),
+    >>>         num_epoch=num_epoch, kvstore='local')
+    """
+
+    def __init__(self, symbol, data_names=('data',), label_names=('softmax_label',),
+                 logger=logging, context=mx.cpu(), work_load_list=None,
+                 fixed_param_names=None, state_names=None, group2ctxs=None,
+                 compression_params=None, update_freq=None):
+        super(SVRGModule, self).__init__(symbol, data_names=data_names, label_names=label_names, logger=logger,
+                                         context=context, work_load_list=work_load_list,
+                                         fixed_param_names=fixed_param_names, state_names=state_names,
+                                         group2ctxs=group2ctxs, compression_params=compression_params)
+
+        # Type check update_frequency
+        if isinstance(update_freq, int):
+            self.update_freq = update_freq
+        else:
+            raise TypeError("update_freq must be an integer")
+
+        self._mod_aux = mx.mod.Module(symbol, data_names, label_names, logger, context, work_load_list,
+                                      fixed_param_names, state_names, group2ctxs, compression_params)
+
+        self._param_dict = [{} for ctx in self._context]
+
+    def _reset_bind(self):
+        """Internal function to reset binded state."""
+        super(SVRGModule, self)._reset_bind()
+        self._mod_aux._reset_bind()
+
 
 Review comment:

[GitHub] sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] 
SVRG Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213413094
 
 

 ##
 File path: contrib/svrg_optimization_python/src/svrg_optimizer.py
 ##
 @@ -0,0 +1,131 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""A `SVRGOptimizer` encapsulates two optimizers to accommodate SVRG optimization logic.
+"""
+
+
+import mxnet as mx
+
+
+@mx.optimizer.register
+class AssignmentOptimizer(mx.optimizer.Optimizer):
+    def update(self, index, weight, grad, state):
+        weight[:] = grad
+
+@mx.optimizer.register
+class SVRGOptimizer(mx.optimizer.Optimizer):
+    """SVRGOptimizer is a wrapper class for two optimizers: one for accumulating full gradients and the other
 
 Review comment:
   This is mainly useful with SGD right? Why allow any optimizer to be passed in here?
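   (For reference, the SVRG update that makes SGD the natural inner optimizer can be sketched in a few lines of NumPy; the names below are illustrative, not the PR's API:)

```python
import numpy as np

def svrg_update(w, grad_w, grad_snapshot, full_grad, lr):
    # Variance-reduced gradient: the stochastic gradient at the current
    # weights, corrected by the same sample's gradient at the snapshot
    # weights plus the full-dataset gradient at the snapshot.
    vr_grad = grad_w - grad_snapshot + full_grad
    return w - lr * vr_grad

w = np.array([1.0, 2.0])
new_w = svrg_update(w,
                    grad_w=np.array([0.5, -0.5]),
                    grad_snapshot=np.array([0.2, 0.1]),
                    full_grad=np.array([0.3, 0.0]),
                    lr=0.1)
# new_w == [0.94, 2.06]
```

   Because the variance-reduced gradient simply replaces the plain stochastic gradient, a first-order step like SGD applies it directly; an adaptive optimizer would interact with the correction terms in less obvious ways.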


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] 
SVRG Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213418335
 
 

 ##
 File path: contrib/svrg_optimization_python/test_svrg_train.py
 ##
 @@ -0,0 +1,84 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import mxnet as mx
+import numpy as np
+from src.svrg_module import SVRGModule
+
+
+def test_svrg_intermediate_level_api(num_epoch):
+    """Test intermediate level svrgmodule API where the training process
+    need to be explicitly defined. KVstore is not explicitly created.
+    """
+    di, mod = create_network()
+    mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
+    mod.init_params(initializer=mx.init.Uniform(0.01), allow_missing=False, force_init=False, allow_extra=False)
+    kv = mx.kv.create("local")
+    mod.init_optimizer(kvstore=kv, optimizer='sgd', optimizer_params=(('learning_rate', 0.025),))
+    metrics = mx.metric.create("mse")
+    for e in range(num_epoch):
+        metrics.reset()
+        if e % (mod.update_freq) == 0:
+            mod.update_full_grads(di)
+        di.reset()
+        for batch in di:
+            mod.forward_backward(data_batch=batch)
+            mod.update()
+            mod.update_metric(metrics, batch.label)
+        print('Epoch[%d] Time cost=%.3f', e, metrics.get())
+
+
+def test_svrg_high_level_api(num_epoch):
+    """Test high level svrgmodule API. KVStore is explicitly created.
+    """
+    di, mod = create_network()
+    mod.fit(di, eval_metric='mse', optimizer='sgd', optimizer_params=(('learning_rate', 0.025),), num_epoch=num_epoch,
+            kvstore='local')
+
+
+def create_network():
+    """Create a linear regression network for performing SVRG optimization.
+    :return: an instance of mx.io.NDArrayIter
+    :return: an instance of mx.mod.svrgmodule for performing SVRG optimization
+    """
+    mx.random.seed(42)
+    train_data = np.random.randint(1, 5, [1000, 2])
+    weights = np.array([1.0, 2.0])
+    train_label = train_data.dot(weights)
+
+    di = mx.io.NDArrayIter(train_data, train_label, batch_size=32, shuffle=True, label_name='lin_reg_label')
+    X = mx.sym.Variable('data')
+    Y = mx.symbol.Variable('lin_reg_label')
+    fully_connected_layer = mx.sym.FullyConnected(data=X, name='fc1', num_hidden=1)
+    lro = mx.sym.LinearRegressionOutput(data=fully_connected_layer, label=Y, name="lro")
+
+    mod = SVRGModule(
+        symbol=lro,
+        data_names=['data'],
+        label_names=['lin_reg_label'], update_freq=2
+    )
+
+    return di, mod
+
+# run as a script
+if __name__ == "__main__":
 
 Review comment:
   Shouldn't it be as follows? Please see other tests.
   
   ```
   if __name__ == '__main__':
   import nose
   nose.runmodule()
   ```




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] 
SVRG Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213417944
 
 

 ##
 File path: contrib/svrg_optimization_python/test_svrg_train.py
 ##
 @@ -0,0 +1,84 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+import mxnet as mx
+import numpy as np
+from src.svrg_module import SVRGModule
+
+
+def test_svrg_intermediate_level_api(num_epoch):
+    """Test intermediate level svrgmodule API where the training process
+    need to be explicitly defined. KVstore is not explicitly created.
+    """
+    di, mod = create_network()
+    mod.bind(data_shapes=di.provide_data, label_shapes=di.provide_label)
+    mod.init_params(initializer=mx.init.Uniform(0.01), allow_missing=False, force_init=False, allow_extra=False)
+    kv = mx.kv.create("local")
+    mod.init_optimizer(kvstore=kv, optimizer='sgd', optimizer_params=(('learning_rate', 0.025),))
+    metrics = mx.metric.create("mse")
+    for e in range(num_epoch):
+        metrics.reset()
+        if e % (mod.update_freq) == 0:
+            mod.update_full_grads(di)
+        di.reset()
+        for batch in di:
+            mod.forward_backward(data_batch=batch)
+            mod.update()
+            mod.update_metric(metrics, batch.label)
+        print('Epoch[%d] Time cost=%.3f', e, metrics.get())
+
+
+def test_svrg_high_level_api(num_epoch):
+    """Test high level svrgmodule API. KVStore is explicitly created.
+    """
+    di, mod = create_network()
+    mod.fit(di, eval_metric='mse', optimizer='sgd', optimizer_params=(('learning_rate', 0.025),), num_epoch=num_epoch,
+            kvstore='local')
+
+
+def create_network():
+    """Create a linear regression network for performing SVRG optimization.
+    :return: an instance of mx.io.NDArrayIter
+    :return: an instance of mx.mod.svrgmodule for performing SVRG optimization
+    """
+    mx.random.seed(42)
 
 Review comment:
   1. Please do not use a fixed seed. You can see other tests under tests/.
   2. Also, what are we asserting here? 
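   One possible concrete assertion (a sketch; using `fc1_weight` as the trained parameter name is an assumption based on the `fc1` layer in `create_network`):

```python
import numpy as np

def check_learned_weights(learned, expected=(1.0, 2.0), atol=1e-2):
    # The synthetic data is generated with known weights [1.0, 2.0],
    # so the trained fc1 weights can be asserted against them.
    assert np.allclose(learned, expected, atol=atol), \
        "learned weights %s differ from expected %s" % (learned, expected)

# e.g. check_learned_weights(mod.get_params()[0]['fc1_weight'].asnumpy().ravel())
check_learned_weights(np.array([1.004, 1.997]))
```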




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] 
SVRG Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213406018
 
 

 ##
 File path: contrib/svrg_optimization_python/src/svrg_module.py
 ##
 @@ -0,0 +1,581 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""A `SVRGModule` implements the `Module` API by wrapping an auxiliary module
+to perform SVRG optimization logic.
+"""
+
+import mxnet as mx
+import time
+import logging
+from svrg_optimizer import SVRGOptimizer
+from mxnet.module import Module
+
+
+class SVRGModule(Module):
+    """SVRGModule is a module that encapsulates two Modules to accommodate the SVRG optimization technique.
+    It is functionally the same as Module API, except it is implemented using SVRG optimization logic.
+
+    Parameters
+    ----------
+    symbol : Symbol
+    data_names : list of str
+        Defaults to `('data')` for a typical model used in image classification.
+    label_names : list of str
+        Defaults to `('softmax_label')` for a typical model used in image
+        classification.
+    logger : Logger
+        Defaults to `logging`.
+    context : Context or list of Context
+        Defaults to ``mx.cpu()``.
+    work_load_list : list of number
+        Default ``None``, indicating uniform workload.
+    fixed_param_names: list of str
+        Default ``None``, indicating no network parameters are fixed.
+    state_names : list of str
+        states are similar to data and label, but not provided by data iterator.
+        Instead they are initialized to 0 and can be set by `set_states()`.
+    group2ctxs : dict of str to context or list of context,
+                 or list of dict of str to context
+        Default is `None`. Mapping the `ctx_group` attribute to the context assignment.
+    compression_params : dict
+        Specifies type of gradient compression and additional arguments depending
+        on the type of compression being used. For example, 2bit compression requires a threshold.
+        Arguments would then be {'type':'2bit', 'threshold':0.5}
+        See mxnet.KVStore.set_gradient_compression method for more details on gradient compression.
+    update_freq: int
+        Specifies the number of times to update the full gradients to be used in the SVRG optimization. For instance,
+        update_freq = 2 will calculates the gradients over all data every two epochs
+
+    Examples
+    --------
+    >>> # An example of declaring and using SVRGModule.
+    >>> mod = mod = SVRGModule(symbol=lro, data_names=['data'], label_names=['lin_reg_label'], update_freq=2)
 
 Review comment:
   Please correct this example.




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] 
SVRG Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213414138
 
 

 ##
 File path: contrib/svrg_optimization_python/test_svrg_train.py
 ##
 @@ -0,0 +1,84 @@
+# Licensed to the Apache Software Foundation (ASF) under one
 
 Review comment:
   Tests should go under tests/. Maybe like below?
   1. tests/python/train/test_contrib_svrg.py : full end-to-end train and testing. See test_mlp.py as an example. Please note, it should be fast (< 2 min at max?). If it is longer than that, we need to move it to the nightly tests.
   2. tests/python/unittest/test_contrib_svrgmodule.py / ..._svrgoptimizer.py etc.
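   For the end-to-end variant, the pass/fail condition could be a simple convergence assertion on the tracked metric (a sketch; the 0.5 threshold is an assumption):

```python
import numpy as np

def mse(pred, label):
    # Same metric the SVRG train script tracks per epoch.
    return float(np.mean((np.asarray(pred) - np.asarray(label)) ** 2))

def assert_converged(final_mse, threshold=0.5):
    # An end-to-end train test asserts the final epoch's MSE is small
    # instead of only printing it.
    assert final_mse < threshold, "training did not converge: mse=%f" % final_mse
```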




[GitHub] sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #12376: [MXNET-854] 
SVRG Optimization in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#discussion_r213406770
 
 

 ##
 File path: contrib/svrg_optimization_python/src/svrg_module.py
 ##
 @@ -0,0 +1,581 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""A `SVRGModule` implements the `Module` API by wrapping an auxiliary module
+to perform SVRG optimization logic.
+"""
+
+import mxnet as mx
+import time
+import logging
+from svrg_optimizer import SVRGOptimizer
+from mxnet.module import Module
+
+
+class SVRGModule(Module):
+    """SVRGModule is a module that encapsulates two Modules to accommodate the SVRG optimization technique.
+    It is functionally the same as Module API, except it is implemented using SVRG optimization logic.
+
+    Parameters
+    ----------
+    symbol : Symbol
+    data_names : list of str
+        Defaults to `('data')` for a typical model used in image classification.
+    label_names : list of str
+        Defaults to `('softmax_label')` for a typical model used in image
+        classification.
+    logger : Logger
+        Defaults to `logging`.
+    context : Context or list of Context
+        Defaults to ``mx.cpu()``.
+    work_load_list : list of number
+        Default ``None``, indicating uniform workload.
+    fixed_param_names: list of str
+        Default ``None``, indicating no network parameters are fixed.
+    state_names : list of str
+        states are similar to data and label, but not provided by data iterator.
+        Instead they are initialized to 0 and can be set by `set_states()`.
+    group2ctxs : dict of str to context or list of context,
+                 or list of dict of str to context
+        Default is `None`. Mapping the `ctx_group` attribute to the context assignment.
+    compression_params : dict
+        Specifies type of gradient compression and additional arguments depending
+        on the type of compression being used. For example, 2bit compression requires a threshold.
+        Arguments would then be {'type':'2bit', 'threshold':0.5}
+        See mxnet.KVStore.set_gradient_compression method for more details on gradient compression.
+    update_freq: int
+        Specifies the number of times to update the full gradients to be used in the SVRG optimization. For instance,
+        update_freq = 2 will calculates the gradients over all data every two epochs
+
+    Examples
+    --------
+    >>> # An example of declaring and using SVRGModule.
+    >>> mod = mod = SVRGModule(symbol=lro, data_names=['data'], label_names=['lin_reg_label'], update_freq=2)
+    >>> mod.fit(di, eval_metric='mse', optimizer='sgd', optimizer_params=(('learning_rate', 0.025),),
+    >>>         num_epoch=num_epoch, kvstore='local')
+    """
+
+    def __init__(self, symbol, data_names=('data',), label_names=('softmax_label',),
+                 logger=logging, context=mx.cpu(), work_load_list=None,
+                 fixed_param_names=None, state_names=None, group2ctxs=None,
+                 compression_params=None, update_freq=None):
+        super(SVRGModule, self).__init__(symbol, data_names=data_names, label_names=label_names, logger=logger,
+                                         context=context, work_load_list=work_load_list,
+                                         fixed_param_names=fixed_param_names, state_names=state_names,
+                                         group2ctxs=group2ctxs, compression_params=compression_params)
+
+        # Type check update_frequency
+        if isinstance(update_freq, int):
+            self.update_freq = update_freq
+        else:
+            raise TypeError("update_freq must be an integer")
 
 Review comment:
   Can you make the error more useful for the user to understand the issue? Example: ("update_freq in SVRGModule must be an integer. Example: 2. Given update_freq = ", update_freq)
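   (A sketch of how the suggestion could look; this is illustrative, not the PR's code:)

```python
def check_update_freq(update_freq):
    # Validate update_freq and raise an actionable error message.
    if not isinstance(update_freq, int):
        raise TypeError("update_freq in SVRGModule must be an integer. "
                        "Example: 2. Given update_freq = %s (type %s)"
                        % (update_freq, type(update_freq).__name__))
    return update_freq
```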



[GitHub] sandeep-krishnamurthy commented on issue #12376: [MXNET-854] SVRG Optimization in Python Module API

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on issue #12376: [MXNET-854] SVRG Optimization 
in Python Module API
URL: https://github.com/apache/incubator-mxnet/pull/12376#issuecomment-416673829
 
 
   Thanks @StephanieYuan - Welcome to the MXNet community!
   
   @piiswrong @eric-haibin-lin @anirudhacharya @Roshrini @vandanavk - You will 
be interested to have a look at this.




[GitHub] apeforest commented on issue #11952: [MXNET-707] Add unit test for mxnet to coreml converter

2018-08-28 Thread GitBox
apeforest commented on issue #11952: [MXNET-707] Add unit test for mxnet to 
coreml converter
URL: https://github.com/apache/incubator-mxnet/pull/11952#issuecomment-416673642
 
 
   @Roshrini Addressed all your comments. Please kindly review again. Thanks!




[GitHub] apeforest commented on a change in pull request #11952: [MXNET-707] Add unit test for mxnet to coreml converter

2018-08-28 Thread GitBox
apeforest commented on a change in pull request #11952: [MXNET-707] Add unit 
test for mxnet to coreml converter
URL: https://github.com/apache/incubator-mxnet/pull/11952#discussion_r213404931
 
 

 ##
 File path: tools/coreml/unittest/test_converter_no_pred.py
 ##
 @@ -0,0 +1,970 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import unittest
+import mxnet as mx
+import numpy as np
+
+from converter._mxnet_converter import convert
+from collections import namedtuple
+from converter import utils
+
+def _mxnet_remove_batch(input_data):
+    for blob in input_data:
+        input_data[blob] = np.reshape(input_data[blob], input_data[blob].shape[1:])
+    return input_data
+
+
+def _get_mxnet_module(net, data_shapes, mode, label_names, input_names=None):
+    """ Given a symbolic graph, input shape and the initialization mode,
+    returns an MXNet module.
+    """
+    mx.random.seed(1993)
+
+    mod = utils.create_module(sym=net, data_shapes=data_shapes, label_shapes=input_names, label_names=label_names)
+
+    if mode == 'random':
+        mod.init_params(
+            initializer=mx.init.Uniform(scale=.1)
+        )
+    elif mode == 'zeros':
+        mod.init_params(
+            initializer=mx.init.Zero()
+        )
+    elif mode == 'ones':
+        mod.init_params(
+            initializer=mx.init.One()
+        )
+    else:
+        Exception(KeyError("%s is not a valid initialization mode" % mode))
+
+    return mod
+
+
+class SingleLayerTest(unittest.TestCase):
+    """
+    Unit test class for testing where converter is able to convert individual layers or not.
+    In order to do so, it converts model and generates preds on both CoreML and MXNet and check they are the same.
+    """
+    def _test_mxnet_model(self, net, input_shape, mode, class_labels=None, coreml_mode=None, label_names=None, delta=1e-3,
+                          pre_processing_args=None, input_name='data'):
+        """ Helper method that convert the CoreML model into CoreML and compares the predictions over random data.
+
+        Parameters
+        ----------
+        net: MXNet Symbol Graph
+            The graph that we'll be converting into CoreML.
+
+        input_shape: tuple of ints
+            The shape of input data. Generally of the format (batch-size, channels, height, width)
+
+        mode: (random|zeros|ones)
+            The mode to use in order to set the parameters (weights and biases).
+
+        label_names: list of strings
+            The names of the output labels. Default: None
+
+        delta: float
+            The maximum difference b/w predictions of MXNet and CoreML that is tolerable.
+
+        input_name: str
+            The name of the input variable to the symbolic graph.
+        """
+
+        data_shapes=[(input_name, input_shape)]
+
+        mod = _get_mxnet_module(net, data_shapes, mode, label_names)
+
+        # Generate some dummy data
+        input_data = {input_name: np.random.uniform(-10., 10., input_shape)}
+        Batch = namedtuple('Batch', ['data'])
+        mod.forward(Batch([mx.nd.array(input_data[input_name])]))
+        mxnet_preds = mod.get_outputs()[0].asnumpy().flatten()
+
+        # Get predictions from coreml
+        coreml_model = convert(
+            model=mod,
+            class_labels=class_labels,
+            mode=coreml_mode,
+            input_shape={input_name: input_shape},
+            preprocessor_args=pre_processing_args
+        )
+
+    def test_tiny_inner_product_zero_input(self):
+        np.random.seed(1988)
+        input_shape = (1, 10)
+        net = mx.sym.Variable('data')
+        net = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=5)
+        self._test_mxnet_model(net, input_shape=input_shape, mode='zeros')
+
+    def test_really_tiny_inner_product_ones_input(self):
+        np.random.seed(1988)
+        input_shape = (1, 1)
+        net = mx.sym.Variable('data')
+        net = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=1)
+        self._test_mxnet_model(net, input_shape=input_shape, mode='ones')
+
+    def test_really_tiny_2_inner_product_ones_input(self):
+        np.random.seed(1988)
+        input_shape = (1, 1)
+        net = 

[GitHub] apeforest commented on a change in pull request #11952: [MXNET-707] Add unit test for mxnet to coreml converter

2018-08-28 Thread GitBox
apeforest commented on a change in pull request #11952: [MXNET-707] Add unit 
test for mxnet to coreml converter
URL: https://github.com/apache/incubator-mxnet/pull/11952#discussion_r213404849
 
 

 ##
 File path: tools/coreml/unittest/test_converter_no_pred.py
 ##
 @@ -0,0 +1,970 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import unittest
+import mxnet as mx
+import numpy as np
+
+from converter._mxnet_converter import convert
+from collections import namedtuple
+from converter import utils
+
+def _mxnet_remove_batch(input_data):
+    for blob in input_data:
+        input_data[blob] = np.reshape(input_data[blob], input_data[blob].shape[1:])
+    return input_data
+
+
+def _get_mxnet_module(net, data_shapes, mode, label_names, input_names=None):
+    """ Given a symbolic graph, input shape and the initialization mode,
+    returns an MXNet module.
+    """
+    mx.random.seed(1993)
+
+    mod = utils.create_module(sym=net, data_shapes=data_shapes, label_shapes=input_names, label_names=label_names)
+
+    if mode == 'random':
+        mod.init_params(
+            initializer=mx.init.Uniform(scale=.1)
+        )
+    elif mode == 'zeros':
+        mod.init_params(
+            initializer=mx.init.Zero()
+        )
+    elif mode == 'ones':
+        mod.init_params(
+            initializer=mx.init.One()
+        )
+    else:
+        Exception(KeyError("%s is not a valid initialization mode" % mode))
+
+    return mod
+
+
+class SingleLayerTest(unittest.TestCase):
+    """
+    Unit test class for testing where converter is able to convert individual layers or not.
+    In order to do so, it converts model and generates preds on both CoreML and MXNet and check they are the same.
+    """
+    def _test_mxnet_model(self, net, input_shape, mode, class_labels=None, coreml_mode=None, label_names=None, delta=1e-3,
+                          pre_processing_args=None, input_name='data'):
+        """ Helper method that convert the CoreML model into CoreML and compares the predictions over random data.
+
+        Parameters
+        ----------
+        net: MXNet Symbol Graph
+            The graph that we'll be converting into CoreML.
+
+        input_shape: tuple of ints
+            The shape of input data. Generally of the format (batch-size, channels, height, width)
+
+        mode: (random|zeros|ones)
+            The mode to use in order to set the parameters (weights and biases).
+
+        label_names: list of strings
+            The names of the output labels. Default: None
+
+        delta: float
+            The maximum difference b/w predictions of MXNet and CoreML that is tolerable.
+
+        input_name: str
+            The name of the input variable to the symbolic graph.
+        """
+
+        data_shapes=[(input_name, input_shape)]
+
+        mod = _get_mxnet_module(net, data_shapes, mode, label_names)
+
+        # Generate some dummy data
+        input_data = {input_name: np.random.uniform(-10., 10., input_shape)}
+        Batch = namedtuple('Batch', ['data'])
+        mod.forward(Batch([mx.nd.array(input_data[input_name])]))
+        mxnet_preds = mod.get_outputs()[0].asnumpy().flatten()
+
+        # Get predictions from coreml
+        coreml_model = convert(
+            model=mod,
+            class_labels=class_labels,
+            mode=coreml_mode,
+            input_shape={input_name: input_shape},
+            preprocessor_args=pre_processing_args
+        )
+
+    def test_tiny_inner_product_zero_input(self):
+        np.random.seed(1988)
+        input_shape = (1, 10)
+        net = mx.sym.Variable('data')
+        net = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=5)
+        self._test_mxnet_model(net, input_shape=input_shape, mode='zeros')
+
+    def test_really_tiny_inner_product_ones_input(self):
 
 Review comment:
   Added for all




[GitHub] vishaalkapoor edited a comment on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
vishaalkapoor edited a comment on issue #12363: distributed training notebook 
tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416670681
 
 
   @sandeep-krishnamurthy It looks like the tutorial/notebook executor will need to be wrapped by a launcher script that executes notebooks in parallel on several nodes.
   
   I modified the existing tutorial/notebook executor so that it could be used for more than tutorials (arbitrary notebooks) on a single host. There is no scaffolding for multi-host.
   https://github.com/apache/incubator-mxnet/blob/ae5d60fa830090f4882a433d9b88c53c26c42b4f/tests/utils/notebook_test/__init__.py#L39
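   One shape such a launcher wrapper could take (a sketch: the host list and the `run_notebook.py` entry point are hypothetical, and passwordless ssh between hosts is assumed; the real harness would reuse the executor linked above):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["worker-1", "worker-2"]          # hypothetical host names
NOTEBOOK = "distributed_training.ipynb"   # hypothetical notebook

def build_cmd(host, notebook):
    # Command that runs the single-host notebook executor on a remote node.
    return ["ssh", host, "python", "run_notebook.py", notebook]

def launch(hosts=HOSTS, notebook=NOTEBOOK):
    # Run the notebook on all hosts in parallel; return per-host exit codes.
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        results = pool.map(lambda h: (h, subprocess.call(build_cmd(h, notebook))), hosts)
        return dict(results)
```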
   




[GitHub] vishaalkapoor commented on issue #12363: distributed training notebook tests

2018-08-28 Thread GitBox
vishaalkapoor commented on issue #12363: distributed training notebook tests
URL: 
https://github.com/apache/incubator-mxnet/issues/12363#issuecomment-416670681
 
 
   @sandeep-krishnamurthy It looks like the tutorial/notebook executor will 
need to be wrapped by a launcher script that executes notebooks in parallel on 
several nodes. 
   
   I modified the existing tutorial/notebook executor so that it could be used 
for more than tutorials on a single-host. There is no scaffolding for 
multi-host. 
https://github.com/apache/incubator-mxnet/blob/ae5d60fa830090f4882a433d9b88c53c26c42b4f/tests/utils/notebook_test/__init__.py#L39
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on issue #11890: Error During Make - undefined reference

2018-08-28 Thread GitBox
sandeep-krishnamurthy commented on issue #11890: Error During Make - undefined 
reference
URL: 
https://github.com/apache/incubator-mxnet/issues/11890#issuecomment-416670461
 
 
   @liangxi627 - can you please give more details? Is it due to a version 
mismatch of OpenCV?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest commented on a change in pull request #11952: [MXNET-707] Add unit test for mxnet to coreml converter

2018-08-28 Thread GitBox
apeforest commented on a change in pull request #11952: [MXNET-707] Add unit 
test for mxnet to coreml converter
URL: https://github.com/apache/incubator-mxnet/pull/11952#discussion_r213400743
 
 

 ##
 File path: tools/coreml/unittest/test_converter_no_pred.py
 ##
 @@ -0,0 +1,970 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import unittest
+import mxnet as mx
+import numpy as np
+
+from converter._mxnet_converter import convert
+from collections import namedtuple
+from converter import utils
+
+def _mxnet_remove_batch(input_data):
+    for blob in input_data:
+        input_data[blob] = np.reshape(input_data[blob], input_data[blob].shape[1:])
+    return input_data
+
+
+def _get_mxnet_module(net, data_shapes, mode, label_names, input_names=None):
+    """ Given a symbolic graph, input shape and the initialization mode,
+    returns an MXNet module.
+    """
+    mx.random.seed(1993)
+
+    mod = utils.create_module(sym=net, data_shapes=data_shapes, label_shapes=input_names, label_names=label_names)
+
+    if mode == 'random':
+        mod.init_params(
+            initializer=mx.init.Uniform(scale=.1)
+        )
+    elif mode == 'zeros':
+        mod.init_params(
+            initializer=mx.init.Zero()
+        )
+    elif mode == 'ones':
+        mod.init_params(
+            initializer=mx.init.One()
+        )
+    else:
+        raise KeyError("%s is not a valid initialization mode" % mode)
+
+    return mod
+
+
+class SingleLayerTest(unittest.TestCase):
+    """
+    Unit test class for testing whether the converter is able to convert individual layers or not.
+    In order to do so, it converts the model, generates predictions on both CoreML and MXNet, and checks that they are the same.
+    """
+    def _test_mxnet_model(self, net, input_shape, mode, class_labels=None, coreml_mode=None, label_names=None, delta=1e-3,
+                          pre_processing_args=None, input_name='data'):
+        """ Helper method that converts the MXNet model into CoreML and compares the predictions over random data.
+
+        Parameters
+        ----------
+        net: MXNet Symbol Graph
+            The graph that we'll be converting into CoreML.
+
+        input_shape: tuple of ints
+            The shape of input data. Generally of the format (batch-size, channels, height, width)
+
+        mode: (random|zeros|ones)
+            The mode to use in order to set the parameters (weights and biases).
+
+        label_names: list of strings
+            The names of the output labels. Default: None
+
+        delta: float
+            The maximum difference b/w predictions of MXNet and CoreML that is tolerable.
+
+        input_name: str
+            The name of the input variable to the symbolic graph.
+        """
+
+        data_shapes = [(input_name, input_shape)]
+
+        mod = _get_mxnet_module(net, data_shapes, mode, label_names)
+
+        # Generate some dummy data
+        input_data = {input_name: np.random.uniform(-10., 10., input_shape)}
+        Batch = namedtuple('Batch', ['data'])
+        mod.forward(Batch([mx.nd.array(input_data[input_name])]))
+        mxnet_preds = mod.get_outputs()[0].asnumpy().flatten()
+
+        # Get predictions from coreml
+        coreml_model = convert(
+            model=mod,
+            class_labels=class_labels,
+            mode=coreml_mode,
+            input_shape={input_name: input_shape},
+            preprocessor_args=pre_processing_args
+        )
+
+    def test_tiny_inner_product_zero_input(self):
+        np.random.seed(1988)
 
 Review comment:
   removed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
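The `_mxnet_remove_batch` helper quoted in the diff above just strips the leading batch dimension from every input blob before handing the data to CoreML (which predicts on single examples). A standalone illustration of the same idea, using only NumPy:

```python
import numpy as np

def remove_batch(input_data):
    # Drop the leading batch dimension from every blob (batch size 1 assumed).
    for blob in input_data:
        input_data[blob] = np.reshape(input_data[blob], input_data[blob].shape[1:])
    return input_data

data = {"data": np.zeros((1, 3, 224, 224))}
print(remove_batch(data)["data"].shape)  # prints (3, 224, 224)
```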


[GitHub] apeforest commented on a change in pull request #11952: [MXNET-707] Add unit test for mxnet to coreml converter

2018-08-28 Thread GitBox
apeforest commented on a change in pull request #11952: [MXNET-707] Add unit 
test for mxnet to coreml converter
URL: https://github.com/apache/incubator-mxnet/pull/11952#discussion_r213400631
 
 

 ##
 File path: tools/coreml/unittest/test_converter_no_pred.py
 ##
 @@ -0,0 +1,970 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import unittest
+import mxnet as mx
+import numpy as np
+
+from converter._mxnet_converter import convert
+from collections import namedtuple
+from converter import utils
+
+def _mxnet_remove_batch(input_data):
+    for blob in input_data:
+        input_data[blob] = np.reshape(input_data[blob], input_data[blob].shape[1:])
+    return input_data
+
+
+def _get_mxnet_module(net, data_shapes, mode, label_names, input_names=None):
+    """ Given a symbolic graph, input shape and the initialization mode,
+    returns an MXNet module.
+    """
+    mx.random.seed(1993)
+
+    mod = utils.create_module(sym=net, data_shapes=data_shapes, label_shapes=input_names, label_names=label_names)
+
+    if mode == 'random':
+        mod.init_params(
+            initializer=mx.init.Uniform(scale=.1)
+        )
+    elif mode == 'zeros':
+        mod.init_params(
+            initializer=mx.init.Zero()
+        )
+    elif mode == 'ones':
+        mod.init_params(
+            initializer=mx.init.One()
+        )
+    else:
+        raise KeyError("%s is not a valid initialization mode" % mode)
+
+    return mod
+
+
+class SingleLayerTest(unittest.TestCase):
+    """
+    Unit test class for testing where converter is able to convert individual layers or not.
 
 Review comment:
   Thanks. Corrected.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest commented on a change in pull request #11952: [MXNET-707] Add unit test for mxnet to coreml converter

2018-08-28 Thread GitBox
apeforest commented on a change in pull request #11952: [MXNET-707] Add unit 
test for mxnet to coreml converter
URL: https://github.com/apache/incubator-mxnet/pull/11952#discussion_r213400352
 
 

 ##
 File path: tools/coreml/unittest/test_converter_no_pred.py
 ##
 @@ -0,0 +1,970 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import unittest
+import mxnet as mx
+import numpy as np
+
+from converter._mxnet_converter import convert
+from collections import namedtuple
+from converter import utils
+
+def _mxnet_remove_batch(input_data):
 
 Review comment:
   removed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] apeforest commented on a change in pull request #11952: [MXNET-707] Add unit test for mxnet to coreml converter

2018-08-28 Thread GitBox
apeforest commented on a change in pull request #11952: [MXNET-707] Add unit 
test for mxnet to coreml converter
URL: https://github.com/apache/incubator-mxnet/pull/11952#discussion_r213400138
 
 

 ##
 File path: tools/coreml/unittest/test_converter_no_pred.py
 ##
 @@ -0,0 +1,970 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
 
 Review comment:
   What docstring? This seems to be the standard license header used by all other 
Python unit tests. Please clarify. Thanks


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

