[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #14383: MXNET_BACKWARD_DO_MIRROR is broken

2019-03-16 Thread GitBox
eric-haibin-lin commented on issue #14383: MXNET_BACKWARD_DO_MIRROR is broken
URL: 
https://github.com/apache/incubator-mxnet/issues/14383#issuecomment-473622044
 
 
   related PR https://github.com/apache/incubator-mxnet/pull/11472


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #14383: MXNET_BACKWARD_DO_MIRROR is broken

2019-03-16 Thread GitBox
eric-haibin-lin commented on issue #14383: MXNET_BACKWARD_DO_MIRROR is broken
URL: 
https://github.com/apache/incubator-mxnet/issues/14383#issuecomment-473622011
 
 
   Contribution is welcome! 




[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #14434: could i set multi kv_store in distribute training program?

2019-03-16 Thread GitBox
eric-haibin-lin commented on issue #14434: could  i set multi kv_store in 
distribute training program?
URL: 
https://github.com/apache/incubator-mxnet/issues/14434#issuecomment-473621985
 
 
   Not currently supported. What is your use case?




[incubator-mxnet] branch master updated: Correct update count with Gluon trainer and update_on_kvstore=False (#14377)

2019-03-16 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 63ed258  Correct update count with Gluon trainer and 
update_on_kvstore=False (#14377)
63ed258 is described below

commit 63ed258063137421e6d4def30435014ab57fb468
Author: Przemyslaw Tredak 
AuthorDate: Sat Mar 16 23:48:30 2019 -0700

Correct update count with Gluon trainer and update_on_kvstore=False (#14377)

* LRScheduler with update_on_kvstore=False

* Cleaning trainer.py

* Retrigger CI

* Fixes from review
---
 python/mxnet/gluon/trainer.py               |  4 ----
 python/mxnet/optimizer/optimizer.py         | 17 ++++++++++++++++-
 tests/python/unittest/test_gluon_trainer.py | 21 ++++++++++---------
 3 files changed, 28 insertions(+), 14 deletions(-)

diff --git a/python/mxnet/gluon/trainer.py b/python/mxnet/gluon/trainer.py
index 8060f38..45a44d8 100644
--- a/python/mxnet/gluon/trainer.py
+++ b/python/mxnet/gluon/trainer.py
@@ -241,10 +241,6 @@ class Trainer(object):
             kvstore.set_optimizer(self._optimizer)
             self._kvstore = kvstore
             self._update_on_kvstore = update_on_kvstore
-            if self._optimizer.lr_scheduler and not self._update_on_kvstore:
-                raise ValueError("update_on_kvstore=False does not support " \
-                                 "optimizer with LRScheduler. Please " \
-                                 "consider setting learning rate manually.")
         else:
             self._kvstore = None
             self._update_on_kvstore = None
diff --git a/python/mxnet/optimizer/optimizer.py b/python/mxnet/optimizer/optimizer.py
index def2c95..2e7fe86 100644
--- a/python/mxnet/optimizer/optimizer.py
+++ b/python/mxnet/optimizer/optimizer.py
@@ -106,7 +106,8 @@ class Optimizer(object):
         self.wd_mult = {}
         self.begin_num_update = begin_num_update
         self.num_update = begin_num_update
-        self._index_update_count = {}
+        self._all_index_update_counts = {0 : {}}
+        self._index_update_count = self._all_index_update_counts[0]
         self.clip_gradient = clip_gradient
         self.multi_precision = multi_precision
         self.aggregate_num = 0
@@ -380,6 +381,18 @@ class Optimizer(object):
             self.wd_mult[name] = float(attr[name]['__wd_mult__'])
         self.wd_mult.update(args_wd_mult)
 
+    def _set_current_context(self, device_id):
+        """Sets the number of the currently handled device.
+
+        Parameters
+        ----------
+        device_id : int
+            The number of current device.
+        """
+        if device_id not in self._all_index_update_counts:
+            self._all_index_update_counts[device_id] = {}
+        self._index_update_count = self._all_index_update_counts[device_id]
+
     def _update_count(self, index):
         """Updates num_update.
 
@@ -1623,6 +1636,8 @@ class Updater(object):
             indices = index
             grads = grad
             weights = weight
+        if weights:
+            self.optimizer._set_current_context(weights[0].context.device_id)
         for i, idx in enumerate(indices):
             # convert ctypes.char_p.value back to python str if needed
             if isinstance(idx, bytes):
diff --git a/tests/python/unittest/test_gluon_trainer.py b/tests/python/unittest/test_gluon_trainer.py
index 9f190a0..2d5874a 100644
--- a/tests/python/unittest/test_gluon_trainer.py
+++ b/tests/python/unittest/test_gluon_trainer.py
@@ -272,19 +272,22 @@ def test_trainer_lr_sched():
         lr *= factor
     mx.nd.waitall()
 
-@with_seed()
-def test_trainer_invalid_lr_sched():
+    # Update on kvstore = False
     x = gluon.Parameter('x', shape=(10,))
     x.initialize(ctx=[mx.cpu(0), mx.cpu(1)], init='zeros')
     freq = 2
     factor = 0.1
     lr = 1
     lr_sched = mx.lr_scheduler.FactorScheduler(freq, factor=factor, base_lr=lr)
-    invalid_trainer = gluon.Trainer([x], 'sgd', {'learning_rate': lr, 'lr_scheduler': lr_sched},
-                                    update_on_kvstore=False)
-    with mx.autograd.record():
-        for w in x.list_data():
-            y = w + 1
-        y.backward()
-    assert_raises(ValueError, invalid_trainer.step, 1)
+    trainer = gluon.Trainer([x], 'sgd', {'learning_rate': lr, 'lr_scheduler': lr_sched},
+                            update_on_kvstore=False)
+    for i in range(10):
+        with mx.autograd.record():
+            for w in x.list_data():
+                y = w + 1
+            y.backward()
+        trainer.step(1)
+        if i % freq == 0:
+            assert trainer.learning_rate == lr, (lr, trainer.learning_rate, i)
+            lr *= factor
     mx.nd.waitall()

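The per-device update-count bookkeeping this commit introduces can be sketched in plain Python; `TinyOptimizer` below is a simplified illustration of the diff, not MXNet's full `Optimizer` class:

```python
# Simplified sketch of the per-device update-count bookkeeping from the
# commit above (attribute names mirror the diff; the class is illustrative).
class TinyOptimizer:
    def __init__(self, begin_num_update=0):
        self.begin_num_update = begin_num_update
        # one counter table per device, keyed by device id
        self._all_index_update_counts = {0: {}}
        self._index_update_count = self._all_index_update_counts[0]

    def _set_current_context(self, device_id):
        # switch to the counter table of the device being updated
        if device_id not in self._all_index_update_counts:
            self._all_index_update_counts[device_id] = {}
        self._index_update_count = self._all_index_update_counts[device_id]

    def _update_count(self, index):
        # count updates per parameter index, per device
        if index not in self._index_update_count:
            self._index_update_count[index] = self.begin_num_update
        self._index_update_count[index] += 1

opt = TinyOptimizer()
opt._set_current_context(0)
opt._update_count(0)   # update of parameter 0 on device 0
opt._set_current_context(1)
opt._update_count(0)   # update of parameter 0 on device 1
```

With `update_on_kvstore=False` each device applies its own updates, so counting them all in one shared table would inflate the step count; keying the tables by device id keeps the counts, and hence the LR schedule, correct.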


[GitHub] [incubator-mxnet] eric-haibin-lin closed issue #12713: distributed kvstore bug in MXNet

2019-03-16 Thread GitBox
eric-haibin-lin closed issue #12713: distributed kvstore bug in MXNet 
URL: https://github.com/apache/incubator-mxnet/issues/12713
 
 
   




[GitHub] [incubator-mxnet] eric-haibin-lin closed issue #13752: Adam, AdaMax and FTML cannot be used with Trainer(update_on_kv=False)

2019-03-16 Thread GitBox
eric-haibin-lin closed issue #13752: Adam, AdaMax and FTML cannot be used with 
Trainer(update_on_kv=False)
URL: https://github.com/apache/incubator-mxnet/issues/13752
 
 
   




[GitHub] [incubator-mxnet] eric-haibin-lin merged pull request #14377: Correct update count with Gluon trainer and update_on_kvstore=False

2019-03-16 Thread GitBox
eric-haibin-lin merged pull request #14377: Correct update count with Gluon 
trainer and update_on_kvstore=False
URL: https://github.com/apache/incubator-mxnet/pull/14377
 
 
   




[GitHub] [incubator-mxnet] haojin2 commented on issue #14359: Speedup _contrib_index_copy

2019-03-16 Thread GitBox
haojin2 commented on issue #14359: Speedup _contrib_index_copy
URL: https://github.com/apache/incubator-mxnet/pull/14359#issuecomment-473620958
 
 
   @szha @zheng-da Ready for merge I think.




[GitHub] [incubator-mxnet] wkcn edited a comment on issue #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
wkcn edited a comment on issue #14361: [numpy] Add a global switch to turn 
on/off numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361#issuecomment-473616124
 
 
   When `set_numpy_comp(True)` is called, the output of some reduction operators 
is always a scalar, which breaks the construction of the computational graph and 
does not benefit deployment.
   
   Example:
   ```python
   >>> x = mx.nd.arange(6)
   >>> x
   [0. 1. 2. 3. 4. 5.]
   
   >>> x[1]
   [1.]
   
   >>> float(x[1])
   1.0
   
   >>> mx.nd.sum(x)      # 1-dim NDArray
   [15.]
   
   >>> mx.numpy.sum(x)   # 0-dim NDArray
   15.0
   
   >>> float(mx.numpy.sum(x))   # built-in float
   15.0
   ```
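For reference, plain NumPy behaves the same way: reductions and scalar indexing both yield 0-dim results, which is the behavior the switch opts into:

```python
import numpy as np

x = np.arange(6, dtype=np.float32)
s = np.sum(x)          # NumPy reduction yields a 0-dim scalar
assert s.ndim == 0
assert float(s) == 15.0
assert x[1].ndim == 0  # scalar indexing is 0-dim as well
```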




[GitHub] [incubator-mxnet] wkcn edited a comment on issue #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
wkcn edited a comment on issue #14361: [numpy] Add a global switch to turn 
on/off numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361#issuecomment-473616124
 
 
   When `set_numpy_comp(True)` is called, the output of some reduction operators 
is always a scalar, which breaks the construction of the computational graph and 
does not benefit deployment.
   
   Example:
   ```python
   >>> x = mx.nd.arange(6)
   >>> x
   [0. 1. 2. 3. 4. 5.]
   
   >>> x[1]
   [1.]
   
   >>> mx.nd.sum(x)      # 1-dim NDArray
   [15.]
   
   >>> mx.numpy.sum(x)   # 0-dim NDArray
   15.0
   
   >>> float(mx.numpy.sum(x))   # built-in float
   15.0
   ```




[GitHub] [incubator-mxnet] wkcn edited a comment on issue #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
wkcn edited a comment on issue #14361: [numpy] Add a global switch to turn 
on/off numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361#issuecomment-473616124
 
 
   When `set_numpy_comp(True)` is called, the output of some reduction operators 
is always a scalar, which breaks the construction of the computational graph and 
does not benefit deployment.
   
   I will write an example.
   ```python
   >>> x = mx.nd.arange(6)
   >>> x
   [0. 1. 2. 3. 4. 5.]
   
   >>> x[1]
   [1.]
   
   >>> mx.nd.sum(x)      # 1-dim NDArray
   [15.]
   
   >>> mx.numpy.sum(x)   # 0-dim NDArray
   15.0
   ```




[GitHub] [incubator-mxnet] wkcn edited a comment on issue #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
wkcn edited a comment on issue #14361: [numpy] Add a global switch to turn 
on/off numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361#issuecomment-473616124
 
 
   When `set_numpy_comp(True)` is called, the output of some reduction operators 
is always a scalar, which breaks the construction of the computational graph and 
does not benefit deployment.
   
   Example:
   ```python
   >>> x = mx.nd.arange(6)
   >>> x
   [0. 1. 2. 3. 4. 5.]
   
   >>> x[1]
   [1.]
   
   >>> mx.nd.sum(x)      # 1-dim NDArray
   [15.]
   
   >>> mx.numpy.sum(x)   # 0-dim NDArray
   15.0
   ```




[GitHub] [incubator-mxnet] wkcn commented on issue #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
wkcn commented on issue #14361: [numpy] Add a global switch to turn on/off 
numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361#issuecomment-473616124
 
 
   When `set_numpy_comp(True)` is called, the output of some reduction operators 
is always a scalar, which breaks the construction of the computational graph and 
does not benefit deployment.
   
   I will write an example.




[GitHub] [incubator-mxnet] wkcn commented on issue #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
wkcn commented on issue #14361: [numpy] Add a global switch to turn on/off 
numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361#issuecomment-473616035
 
 
   Sorry that I clicked by mistake.




[GitHub] [incubator-mxnet] reminisce opened a new pull request #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
reminisce opened a new pull request #14361: [numpy] Add a global switch to turn 
on/off numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361
 
 
   This PR is implemented based upon the discussion with @eric-haibin-lin. It 
provides two APIs and NumPy-compatible NDArray indexing behavior, i.e. `y = 
x[i]`.
   
   1. `set_numpy_comp(enable)`
   This is a thread-safe utility function for users to enable or disable 
NumPy-compatible behaviors in MXNet. This will generally affect the behavior of 
the following two operations:
   1. NDArray indexing. When `enable=False`, NDArray indexing always 
returns a tensor with `ndim >= 1`, which is the default behavior of MXNet kept 
for backward compatibility. When `enable=True`, NDArray indexing will return a 
tensor with `ndim >= 0`, which is consistent with NumPy's indexing behavior. 
The result tensor with `ndim = 0` is actually a scalar whose shape is `()`. For 
example:
   
   >>> from mxnet.base import set_numpy_comp
   >>> x = mx.nd.arange(6)
   >>> x
   [0. 1. 2. 3. 4. 5.]
   
   >>> x[1]
   [1.]
   
   >>> set_numpy_comp(enable=True)
   >>> x[1]
   1.0
   
   >>> set_numpy_comp(enable=False)
   >>> x[1]
   [1.]
   
   
   
   2. NDArray's convenience fluent methods. When `enable=True`, the 
convenience fluent methods will dispatch the calls to NumPy operators if 
implemented in MXNet. For example, given an NDArray `data`, `data.sum()` will 
call `mxnet.ndarray.sum(data)` when `enable=False` (default behavior), and 
`mxnet.numpy.sum(data)` when `enable=True`.
   
   
   2. `_is_numpy_comp()`
   This is a thread-safe utility function for checking whether 
NumPy-compatibility has been enabled or disabled. This is implemented for 
developers to use. Users are not expected to call this function.
   
   @junrushao1994 @eric-haibin-lin @szha @zheng-da @yzhliu 




[GitHub] [incubator-mxnet] wkcn closed pull request #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
wkcn closed pull request #14361: [numpy] Add a global switch to turn on/off 
numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361
 
 
   




[GitHub] [incubator-mxnet] wkcn commented on issue #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
wkcn commented on issue #14361: [numpy] Add a global switch to turn on/off 
numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361#issuecomment-473615972
 
 
   When `set_numpy_comp(True)` is called, the output of some reduction operators 
is always a scalar, which breaks the construction of a static graph and hinders 
deployment.
   




[GitHub] [incubator-mxnet] arcadiaphy edited a comment on issue #14451: fix custom operation in fork

2019-03-16 Thread GitBox
arcadiaphy edited a comment on issue #14451: fix custom operation in fork
URL: https://github.com/apache/incubator-mxnet/pull/14451#issuecomment-473611263
 
 
   @wkcn For the two questions:
   1. Yes, each process has its own independent threads. Fork only duplicates the 
caller thread, so we need to make sure all locking primitives are in valid 
states and re-create the threads in the child process. The easiest way is to 
restart CustomOperator when a fork happens, just as the Engine does.
   2. There is no fork on Windows, so Python uses the spawn method to create new 
processes. I have no Windows machine, so I can only test on Linux and Mac with 
`import multiprocessing as mp; mp.set_start_method('spawn')`. It seems that 
Python re-imports mxnet in the child process when spawning, so the bug doesn't 
exist at all.




[GitHub] [incubator-mxnet] arcadiaphy edited a comment on issue #14451: fix custom operation in fork

2019-03-16 Thread GitBox
arcadiaphy edited a comment on issue #14451: fix custom operation in fork
URL: https://github.com/apache/incubator-mxnet/pull/14451#issuecomment-473611263
 
 
   @wkcn For the two questions:
   1. Yes, each process has its own independent threads. Fork only duplicates the 
caller thread, so we need to make sure all locking primitives are in valid 
states and re-create the threads in the child process. The easiest way is to 
restart CustomOperator when a fork happens, just as the Engine does.
   2. There is no fork on Windows, so Python uses the spawn method to create new 
processes. I have no Windows machine, so I can only test on Unix with 
`import multiprocessing as mp; mp.set_start_method('spawn')`. It seems that 
Python re-imports mxnet in the child process when spawning, so the bug doesn't 
exist at all.




[GitHub] [incubator-mxnet] arcadiaphy edited a comment on issue #14451: fix custom operation in fork

2019-03-16 Thread GitBox
arcadiaphy edited a comment on issue #14451: fix custom operation in fork
URL: https://github.com/apache/incubator-mxnet/pull/14451#issuecomment-473611263
 
 
   @wkcn For the two questions:
   1. Yes, each process has its own independent threads. Fork only duplicates the 
caller thread, so we need to make sure all locking primitives are in valid 
states and restart the threads in the child process. The easiest way is to 
restart CustomOperator when a fork happens, just as the Engine does.
   2. There is no fork on Windows, so Python uses the spawn method to create new 
processes. I have no Windows machine, so I can only test on Unix with 
`import multiprocessing as mp; mp.set_start_method('spawn')`. It seems that 
Python re-imports mxnet in the child process when spawning, so the bug doesn't 
exist at all.




[GitHub] [incubator-mxnet] arcadiaphy edited a comment on issue #14451: fix custom operation in fork

2019-03-16 Thread GitBox
arcadiaphy edited a comment on issue #14451: fix custom operation in fork
URL: https://github.com/apache/incubator-mxnet/pull/14451#issuecomment-473611263
 
 
   @wkcn For the two questions:
   1. Yes, each process has its own independent threads. Fork only duplicates the 
caller thread, so we need to make sure all locking primitives are in valid 
states and restart the threads in the child process. The easiest way is to 
restart CustomOperator when a fork happens, just as the Engine does.
   2. There is no fork on Windows, so Python uses the spawn method to create new 
processes. I have no Windows machine, so I can only test on Unix with 
`import multiprocessing as mp; mp.set_start_method('spawn')`. It seems that 
Python re-imports mxnet in the child process when spawning, so the bug doesn't 
exist even without the fix.




[GitHub] [incubator-mxnet] arcadiaphy commented on issue #14451: fix custom operation in fork

2019-03-16 Thread GitBox
arcadiaphy commented on issue #14451: fix custom operation in fork
URL: https://github.com/apache/incubator-mxnet/pull/14451#issuecomment-473611263
 
 
   @wkcn For the two questions:
   1. Yes, each process has its own independent threads. Fork only duplicates the 
caller thread, so we need to make sure all locking primitives are in valid 
states and restart the threads in the child process. The easiest way is to 
restart CustomOperator when a fork happens, just as the Engine does.
   2. There is no fork on Windows, so Python uses the spawn method to create new 
processes. I have no Windows machine, so I can only test on Unix with:
   ```python
   import multiprocessing as mp
   mp.set_start_method('spawn')
   ```
   It seems that Python re-imports mxnet in the child process when spawning, so 
the bug doesn't exist even without the fix.
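The restart-on-fork idea can be illustrated in Python with `os.register_at_fork` (available since Python 3.7); the `WorkerPool` class here is an illustrative stand-in, not MXNet's actual C++ `CustomOperator`:

```python
import os
import threading

class WorkerPool:
    """Toy stand-in for a worker thread that must be rebuilt after fork."""
    def __init__(self):
        self._start()

    def _start(self):
        # the worker blocks on an event, simulating a long-lived thread
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._stop.wait, daemon=True)
        self._thread.start()

    def restart_after_fork(self):
        # in the child process the worker thread no longer exists,
        # so re-create it (analogous to restarting CustomOperator)
        self._start()

pool = WorkerPool()
if hasattr(os, 'register_at_fork'):
    os.register_at_fork(after_in_child=pool.restart_after_fork)
```

The same pattern underlies `pthread_atfork`-style handlers: only the forking thread survives in the child, so any thread-backed service must re-initialize its locks and threads there.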




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-03-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 938b971  Bump the publish timestamp.
938b971 is described below

commit 938b971611f04fed67bd855376e999595b29ba2b
Author: mxnet-ci 
AuthorDate: Sun Mar 17 01:17:32 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..7bc7d4f
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sun Mar 17 01:17:32 UTC 2019



[GitHub] [incubator-mxnet] szha merged pull request #14444: fix OOM error during resource allocation

2019-03-16 Thread GitBox
szha merged pull request #14444: fix OOM error during resource allocation
URL: https://github.com/apache/incubator-mxnet/pull/14444
 
 
   




[incubator-mxnet] branch master updated: fix OOM error during resource allocation (#14444)

2019-03-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new f602b0d  fix OOM error during resource allocation (#14444)
f602b0d is described below

commit f602b0de310fbdad40b26a87e51c6820790858b3
Author: Sheng Zha 
AuthorDate: Sat Mar 16 14:24:14 2019 -0700

fix OOM error during resource allocation (#14444)
---
 src/resource.cc | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/resource.cc b/src/resource.cc
index 80a5c0e..0317ff3 100644
--- a/src/resource.cc
+++ b/src/resource.cc
@@ -432,6 +432,9 @@ void Resource::get_cudnn_dropout_desc(
     // not initialized yet.
     size_t dropout_state_size;
     CUDNN_CALL(cudnnDropoutGetStatesSize(stream->dnn_handle_, &dropout_state_size));
+    // reserve GPU space
+    Storage::Get()->DirectFree(
+      Storage::Get()->Alloc(dropout_state_size, state_space->ctx));
     CUDNN_CALL(cudnnSetDropoutDescriptor(*dropout_desc, stream->dnn_handle_,
                                          dropout,
                                          state_space->GetSpace(dropout_state_size),

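The fix follows a reserve-then-release pattern: allocate the required size through MXNet's own pooled allocator and free it immediately, so an out-of-memory condition surfaces through the framework's allocator (which can report or recover cleanly) before the buffer is handed to cuDNN. A toy illustration of the pattern, with a hypothetical `PoolAllocator` rather than MXNet's `Storage` API:

```python
# Hypothetical pooled allocator illustrating the reserve-then-release
# pattern from the fix above (not MXNet's Storage API).
class PoolAllocator:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def alloc(self, size):
        # fail fast, inside our own allocator, if the pool cannot hold it
        if self.used + size > self.capacity:
            raise MemoryError('pool exhausted')
        self.used += size
        return size

    def free(self, size):
        self.used -= size

pool = PoolAllocator(capacity=1024)
handle = pool.alloc(512)   # probe: reserve the size we will need
pool.free(handle)          # release immediately; the space stays pooled
```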


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-03-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new ba2b31e  Bump the publish timestamp.
ba2b31e is described below

commit ba2b31e91ee95b5bf73ea0ecca766a6e99ef9b4c
Author: mxnet-ci 
AuthorDate: Sat Mar 16 20:52:04 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..100533c
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Mar 16 20:52:04 UTC 2019



[GitHub] [incubator-mxnet] reminisce commented on issue #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
reminisce commented on issue #14361: [numpy] Add a global switch to turn on/off 
numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361#issuecomment-473591126
 
 
   @wkcn Can you elaborate on why we cannot use a switch to guarantee backward 
compatibility while allowing users to opt in to the new numpy-compatible behavior?




[GitHub] [incubator-mxnet] arcadiaphy edited a comment on issue #14396: mx.nd.Custom not working in subprocess

2019-03-16 Thread GitBox
arcadiaphy edited a comment on issue #14396: mx.nd.Custom not working in 
subprocess
URL: 
https://github.com/apache/incubator-mxnet/issues/14396#issuecomment-473577236
 
 
   After #14363, the threads are created when running a custom operator, so the 
custom operator also needs to be executed in the main process to reproduce the 
bug:
   
   ```python
   from concurrent import futures

   import mxnet as mx
   import sys

   class AdditionOP(mx.operator.CustomOp):
       def __init__(self):
           super(AdditionOP, self).__init__()
       def forward(self, is_train, req, in_data, out_data, aux):
           out_data[0][:] = in_data[0] + in_data[1]
       def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
           in_grad[0][:] = out_grad[0]
           in_grad[1][:] = out_grad[0]

   @mx.operator.register("AdditionOP")
   class AdditionOPProp(mx.operator.CustomOpProp):
       def __init__(self):
           super(AdditionOPProp, self).__init__()
       def list_arguments(self):
           return ['a', 'b']
       def list_outputs(self):
           return ['output']
       def infer_shape(self, in_shape):
           return in_shape, [in_shape[0]]
       def create_operator(self, ctx, shapes, dtypes):
           return AdditionOP()

   def foo():
       a = mx.nd.array([1, 2, 3])
       b = mx.nd.array([4, 5, 6])

       a.attach_grad()
       b.attach_grad()

       print("REC")
       with mx.autograd.record():
           c = mx.nd.Custom(a, b, op_type='AdditionOP')

       dc = mx.nd.array([7, 8, 9])
       c.backward(dc)

       print('Okay :-)')
       print('a + b = c \n {} + {} = {}'.format(a.asnumpy(), b.asnumpy(), c.asnumpy()))

   def main():
       foo()  # ensure custom threads are created in the main process
       ex = futures.ProcessPoolExecutor(1)
       r = ex.submit(foo)
       r.result()

   if __name__ == '__main__':
       main()
   ```




[GitHub] [incubator-mxnet] arcadiaphy edited a comment on issue #14396: mx.nd.Custom not working in subprocess

2019-03-16 Thread GitBox
arcadiaphy edited a comment on issue #14396: mx.nd.Custom not working in 
subprocess
URL: 
https://github.com/apache/incubator-mxnet/issues/14396#issuecomment-473577236
 
 
   After #14363, the threads in custom is created  when running custom 
operator, so custom operator needs also to be executed in main process to 
reproduce the bug:
   
   ```
   from concurrent import futures

   import mxnet as mx
   import sys

   class AdditionOP(mx.operator.CustomOp):
       def __init__(self):
           super(AdditionOP, self).__init__()

       def forward(self, is_train, req, in_data, out_data, aux):
           out_data[0][:] = in_data[0] + in_data[1]

       def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
           in_grad[0][:] = out_grad[0]
           in_grad[1][:] = out_grad[0]

   @mx.operator.register("AdditionOP")
   class AdditionOPProp(mx.operator.CustomOpProp):
       def __init__(self):
           super(AdditionOPProp, self).__init__()

       def list_arguments(self):
           return ['a', 'b']

       def list_outputs(self):
           return ['output']

       def infer_shape(self, in_shape):
           return in_shape, [in_shape[0]]

       def create_operator(self, ctx, shapes, dtypes):
           return AdditionOP()

   def foo():
       a = mx.nd.array([1, 2, 3])
       b = mx.nd.array([4, 5, 6])

       a.attach_grad()
       b.attach_grad()

       print("REC")
       with mx.autograd.record():
           c = mx.nd.Custom(a, b, op_type='AdditionOP')

       dc = mx.nd.array([7, 8, 9])
       c.backward(dc)

       print('Okay :-)')
       print('a + b = c \n {} + {} = {}'.format(a.asnumpy(), b.asnumpy(), c.asnumpy()))

   def main():
       foo()  # ensure custom threads created in main process
       ex = futures.ProcessPoolExecutor(1)
       r = ex.submit(foo)
       r.result()

   if __name__ == '__main__':
       main()
   ```
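   The reason foo() must run once in the parent is that worker threads do not survive 
fork(): the child spawned by ProcessPoolExecutor inherits the parent's memory but none 
of its threads. A stdlib-only sketch of that effect, independent of MXNet:

   ```python
   import os
   import threading

   # Start a daemon worker thread in the parent process.
   worker = threading.Thread(target=threading.Event().wait, daemon=True)
   worker.start()
   parent_threads = threading.active_count()  # main thread + worker

   r, w = os.pipe()
   pid = os.fork()
   if pid == 0:
       # Child: only the thread that called fork() survives,
       # so active_count() drops back to 1.
       os.write(w, str(threading.active_count()).encode())
       os._exit(0)
   os.close(w)
   child_threads = int(os.read(r, 16))
   os.waitpid(pid, 0)
   ```

   Any bookkeeping in the parent that assumes the worker still exists is therefore stale 
in the child, which is exactly the situation the custom-operator threads end up in.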




[GitHub] [incubator-mxnet] arcadiaphy opened a new pull request #14451: fix custom operation in fork

2019-03-16 Thread GitBox
arcadiaphy opened a new pull request #14451: fix custom operation in fork
URL: https://github.com/apache/incubator-mxnet/pull/14451
 
 
   ## Description ##
   Fixes #14396. Updates the pthread_atfork handlers so that fork is set up properly for 
custom operators.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
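   
   The pthread_atfork fix is in C++, but the idea can be sketched with Python's stdlib 
analog os.register_at_fork, which re-creates worker state in the child after a fork. 
The Workers class below is a hypothetical stand-in for the custom-operator thread pool, 
not MXNet's actual implementation:
   
   ```python
   import os
   import threading

   class Workers:
       """Hypothetical stand-in for a thread pool that fork() would break."""
       def __init__(self):
           self.thread = None

       def start(self):
           self.thread = threading.Thread(target=threading.Event().wait, daemon=True)
           self.thread.start()

   pool = Workers()
   pool.start()

   # Analogous to a pthread_atfork child handler: threads do not survive
   # fork(), so re-create the worker thread in the child process.
   os.register_at_fork(after_in_child=pool.start)

   pid = os.fork()
   if pid == 0:
       # The atfork hook already ran here, so the pool has a live thread again.
       os._exit(0 if pool.thread.is_alive() else 1)
   _, status = os.waitpid(pid, 0)
   child_ok = os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0
   ```
   
   Without the registered hook, the child would inherit a Thread object whose 
underlying OS thread no longer exists.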
   




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-03-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 37f583f  Bump the publish timestamp.
37f583f is described below

commit 37f583ffce1afadb5366a445170f633ac301ea22
Author: mxnet-ci 
AuthorDate: Sat Mar 16 19:17:17 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..d9d3993
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Mar 16 19:17:17 UTC 2019



[GitHub] [incubator-mxnet] vandanavk commented on issue #14035: Fix documentation for bilinear upsampling and add unit test

2019-03-16 Thread GitBox
vandanavk commented on issue #14035: Fix documentation for bilinear upsampling 
and add unit test
URL: https://github.com/apache/incubator-mxnet/pull/14035#issuecomment-473574788
 
 
   @mxnet-label-bot update [Operator, pr-awaiting-review]




[GitHub] [incubator-mxnet] seujung commented on a change in pull request #13735: update wavenet codes

2019-03-16 Thread GitBox
seujung commented on a change in pull request #13735: update wavenet codes
URL: https://github.com/apache/incubator-mxnet/pull/13735#discussion_r265847269
 
 

 ##
 File path: example/gluon/wavenet/models.py
 ##
 @@ -0,0 +1,118 @@
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Module: WaveNet network modulep
+"""
+from mxnet import nd
+from mxnet.gluon import nn
+import mxnet.ndarray as F
+# pylint: disable=invalid-name, too-many-arguments, arguments-differ, attribute-defined-outside-init, too-many-instance-attributes, invalid-sequence-index, no-self-use
+class One_Hot(nn.Block):
+    """
+    Description : generate one hot result
+    """
+    def __init__(self, depth):
+        super(One_Hot, self).__init__()
+        self.depth = depth
+
+    def forward(self, X_in):
+        with X_in.context:
+            X_in = X_in
+            self.ones = nd.one_hot(nd.arange(self.depth), self.depth)
+            return self.ones[X_in, :]
+
+    def __repr__(self):
+        return self.__class__.__name__ + "({})".format(self.depth)
+
+class WaveNet(nn.Block):
 
 Review comment:
   I tried to change it to a HybridBlock in #764fa84241344c0ea2721f14a2d81e878150b46e, but 
this code raises an error. 
   code : output = F.multiply(F.sigmond(output_sigmoid), F.tanh(output_tanh))
   error : Argument data must have NDArray type, but got Symbol
   Is there any way to solve this problem?
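   
   A likely cause: this file imports mxnet.ndarray as a module-level F, while a 
HybridBlock's hybrid_forward receives its own F namespace (mx.nd when run imperatively, 
mx.sym when hybridized), so NDArray-only ops get fed Symbols. A minimal sketch of the 
gated activation written the HybridBlock way — the class and variable names here are 
hypothetical, not from the PR:
   
   ```python
   import mxnet as mx
   from mxnet.gluon import nn

   class GatedActivation(nn.HybridBlock):
       # hybrid_forward receives F (mx.nd or mx.sym); do not shadow it with a
       # module-level "import mxnet.ndarray as F" inside a HybridBlock.
       def hybrid_forward(self, F, output_sigmoid, output_tanh):
           return F.broadcast_mul(F.sigmoid(output_sigmoid), F.tanh(output_tanh))

   net = GatedActivation()
   net.hybridize()
   out = net(mx.nd.array([0.0, 1.0]), mx.nd.array([0.0, 1.0]))
   ```
   
   With hybridize(), the same code builds a Symbol graph, which avoids the 
"Argument data must have NDArray type, but got Symbol" error.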




[GitHub] [incubator-mxnet] Bleach665 commented on issue #14378: Win10 build error: gtest, no 'object' file generated

2019-03-16 Thread GitBox
Bleach665 commented on issue #14378: Win10 build error: gtest, no 'object' file 
generated
URL: 
https://github.com/apache/incubator-mxnet/issues/14378#issuecomment-473545561
 
 
   I did not make any changes to the code. As for CMake, I do not remember 
exactly what changes I made to the configuration, but certainly nothing 
critically important. Probably all of these issues were already fixed below, or 
it depended on the VS version (mine is 2017; yours, as I understand, is 2015).




[GitHub] [incubator-mxnet] stereomatchingkiss edited a comment on issue #14378: Win10 build error: gtest, no 'object' file generated

2019-03-16 Thread GitBox
stereomatchingkiss edited a comment on issue #14378: Win10 build error: gtest, 
no 'object' file generated
URL: 
https://github.com/apache/incubator-mxnet/issues/14378#issuecomment-473538922
 
 
   > but I successfully build mxnet with 
[ed83071](https://github.com/apache/incubator-mxnet/commit/ed8307121ecb7d7f0717ccd080848f5b16dcf191)
   
   It is good news if you can build it on Windows. How did you solve the 
issues of #13958 and #14343? I can build mxnet (1.3.1) with MKL too; the problem is that 
it always throws an exception at runtime if you run the inference task on CPU (GPU 
is fine).
   
   ed83071 is a bit newer than 1.4.0; I wonder if they have already fixed the build 
issues on Windows? I am still waiting for the next stable release.




[GitHub] [incubator-mxnet] Bleach665 edited a comment on issue #14378: Win10 build error: gtest, no 'object' file generated

2019-03-16 Thread GitBox
Bleach665 edited a comment on issue #14378: Win10 build error: gtest, no 
'object' file generated
URL: 
https://github.com/apache/incubator-mxnet/issues/14378#issuecomment-473535906
 
 
   @stereomatchingkiss , but I successfully built mxnet at rev 
ed8307121ecb7d7f0717ccd080848f5b16dcf191, with MKL, MKL BLAS, CUDA, cuDNN, 
OpenCV... However, I have not used it yet. Is it possible that the bugs will 
appear once it is used? 
   And thanks for the great guide.
   
   PS. I skip gtest during the build.




[GitHub] [incubator-mxnet] stereomatchingkiss commented on issue #14378: Win10 build error: gtest, no 'object' file generated

2019-03-16 Thread GitBox
stereomatchingkiss commented on issue #14378: Win10 build error: gtest, no 
'object' file generated
URL: 
https://github.com/apache/incubator-mxnet/issues/14378#issuecomment-473531466
 
 
   > 
   > 
   > Yes, --recursive flag was used.
   > And just now I did reproduce this bug on another computer with same 
software.
   
   Don't try to build mxnet 1.4.0, it is broken. I suggest you build mxnet 1.3.1; 
it is also still broken on Windows 10, but with some effort you can build it.
   
   Please check this [blog 
](http://qtandopencv.blogspot.com/2019/03/build-mxnet-131-on-windows.html) if 
you want to know how to build mxnet 1.3.1 on Windows.




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-03-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new e3f0727  Bump the publish timestamp.
e3f0727 is described below

commit e3f07274a70452573e245d7fa0392382bcdcfdc4
Author: mxnet-ci 
AuthorDate: Sat Mar 16 13:18:04 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..77537c8
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Mar 16 13:18:04 UTC 2019



[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #13668: Update MKL-DNN to v0.18 release (was: fix the Dense layer issue)

2019-03-16 Thread GitBox
pengzhao-intel commented on issue #13668: Update MKL-DNN to v0.18 release (was: 
fix the Dense layer issue)
URL: https://github.com/apache/incubator-mxnet/pull/13668#issuecomment-473521118
 
 
   It's great that all CI passed. 
   @wkcn @szha please help confirm that all concerns are resolved.
   
   We will merge the PR in 24 hours if there are no further concerns.
   
   
   




[GitHub] [incubator-mxnet] chinakook commented on issue #14443: Mxnet allclose

2019-03-16 Thread GitBox
chinakook commented on issue #14443: Mxnet allclose
URL: https://github.com/apache/incubator-mxnet/pull/14443#issuecomment-473520075
 
 
   Does this op only support float32?
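For readers following along: NumPy-style `allclose` compares elementwise against a combined absolute/relative tolerance, and the dtype question matters because float32 carries only about 7 significant decimal digits, so tolerances that are reasonable for float64 can be too tight for float32. A minimal sketch of those generic semantics (this models the standard check, not the MXNet operator in this PR):

```python
import numpy as np

def allclose_ref(a, b, rtol=1e-5, atol=1e-8):
    """Reference semantics of allclose: |a - b| <= atol + rtol * |b|, elementwise."""
    return bool(np.all(np.abs(a - b) <= atol + rtol * np.abs(b)))

# A perturbation of 1e-6 is representable in float32 and sits well inside
# the default tolerances, so the check passes.
a = np.array([1.0, 2.0], dtype=np.float32)
b = a + np.float32(1e-6)
print(allclose_ref(a, b))
```

A float32-specific operator would typically widen `rtol` (e.g. toward 1e-3) to account for the coarser mantissa.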




[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #14277: Enhance PartitionGraph

2019-03-16 Thread GitBox
ZhennanQin commented on a change in pull request #14277: Enhance PartitionGraph
URL: https://github.com/apache/incubator-mxnet/pull/14277#discussion_r266196192
 
 

 ##
 File path: src/operator/subgraph/subgraph_property.h
 ##
 @@ -200,7 +197,7 @@ typedef dmlc::ThreadLocalStore
-SubgraphPropertyRegistry::Get()->__REGISTER_OR_GET__(#Name, &SubgraphPropertyType::Create)
+SubgraphPropertyRegistry::Get()->__REGISTER__(#Name, &SubgraphPropertyType::Create)
 
 Review comment:
   @anirudh2290 Putting MXNET_REGISTER_SUBGRAPH_PROPERTY in a header file is not 
a good use case. If a developer includes that header twice, then I think they 
just want to register those properties twice for the same backend. I admit this 
may be misleading if the developer doesn't know the details behind 
MXNET_REGISTER_SUBGRAPH_PROPERTY, but I have no idea how to avoid this. Do you 
have any good suggestions? Please consider the requirements I listed in my 
previous comment.
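The behaviour under discussion can be modelled in a few lines. The names `register` and `register_or_get` below are hypothetical stand-ins for the registry's `__REGISTER__` and `__REGISTER_OR_GET__` macros; the sketch only illustrates why expanding an append-style registration macro twice (for example, via a header included from two places) yields a duplicate entry, while the get-or-register style deduplicates:

```python
# Toy registry: name -> list of creator callables.
registry = {}

def register(name, creator):
    # __REGISTER__-style: every call appends another entry for `name`.
    registry.setdefault(name, []).append(creator)

def register_or_get(name, creator):
    # __REGISTER_OR_GET__-style: only the first call for `name` registers;
    # later calls return the existing entry.
    entries = registry.setdefault(name, [])
    if not entries:
        entries.append(creator)
    return entries[0]

# Two expansions of the "macro" for the same backend:
register("MyBackend", dict)
register("MyBackend", dict)
assert len(registry["MyBackend"]) == 2   # duplicate registration

register_or_get("OtherBackend", dict)
register_or_get("OtherBackend", dict)
assert len(registry["OtherBackend"]) == 1  # deduplicated
```

Which behaviour is correct depends on whether registering the same property twice for one backend is an intended use case, which is exactly the open question in this thread.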




[GitHub] [incubator-mxnet] wkcn edited a comment on issue #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
wkcn edited a comment on issue #14361: [numpy] Add a global switch to turn 
on/off numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361#issuecomment-473513487
 
 
   In my opinion, we should not use the switch. Instead, we can add a typecast 
from a size-1 NDArray to a scalar. 
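The proposed typecast can be sketched with NumPy semantics (MXNet's existing `NDArray.asscalar()` is the closely related call). The `as_scalar` helper below is hypothetical and only illustrates the rule being suggested: a single-element array converts unambiguously to a Python scalar, anything larger is rejected.

```python
import numpy as np

def as_scalar(arr):
    # Only a size-1 array is unambiguous as a scalar; reject anything bigger.
    if arr.size != 1:
        raise ValueError("only size-1 arrays can be converted to a scalar")
    return arr.reshape(()).item()

x = np.array([3.5], dtype=np.float32)  # shape (1,), size 1
print(as_scalar(x))                    # 3.5, as a Python float

z = np.array(2.0)                      # zero-dim array, size 1
print(as_scalar(z))                    # 2.0
```

With such a cast in place, size-1 results could flow into Python-scalar contexts without a global numpy-compatibility switch.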




[GitHub] [incubator-mxnet] wkcn commented on issue #14361: [numpy] Add a global switch to turn on/off numpy compatibility

2019-03-16 Thread GitBox
wkcn commented on issue #14361: [numpy] Add a global switch to turn on/off 
numpy compatibility
URL: https://github.com/apache/incubator-mxnet/pull/14361#issuecomment-473513487
 
 
   In my opinion, we should not use the switch. Instead, we can add a typecast 
from a 0-shape NDArray to a scalar. 




[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #14286: Add examples of running MXNet with Horovod

2019-03-16 Thread GitBox
wuxun-zhang commented on issue #14286: Add examples of running MXNet with 
Horovod
URL: https://github.com/apache/incubator-mxnet/pull/14286#issuecomment-473513431
 
 
   @apeforest  There is no problem when building Horovod from source. I just 
want to verify that the Horovod PyPI package can also work well. 
   
   @yuxihu I have tried the latest MXNet with this 
[commit](https://github.com/apache/incubator-mxnet/commit/226212b40b5b1a43a3d91d3a810541887beaae8c).
 When I `import horovod.mxnet as hvd`, I still get the `undefined symbol` error.  
Did you run this example successfully on CPU? If so, can you tell me your 
build command for MXNet without mkldnn? Thanks in advance. 




[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-03-16 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new aaa0c9e  Bump the publish timestamp.
aaa0c9e is described below

commit aaa0c9e2eaf945afaef9ba097792d27d8252c852
Author: mxnet-ci 
AuthorDate: Sat Mar 16 07:22:33 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..059b908
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Sat Mar 16 07:22:33 UTC 2019