[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16876: [Numpy] Implementation npx.{sample}_n

2019-11-21 Thread GitBox
xidulu commented on a change in pull request #16876: [Numpy] Implementation npx.{sample}_n URL: https://github.com/apache/incubator-mxnet/pull/16876#discussion_r349462869 ## File path: tests/nightly/test_np_random.py ## @@ -0,0 +1,83 @@ +# Licensed to the Apache Software F

[GitHub] [incubator-mxnet] kshitij12345 commented on issue #15331: [fix] missing input log higher order.

2019-11-21 Thread GitBox
kshitij12345 commented on issue #15331: [fix] missing input log higher order. URL: https://github.com/apache/incubator-mxnet/pull/15331#issuecomment-557421604 @apeforest Sure no worries. Thanks.

[GitHub] [incubator-mxnet] sxjscience commented on issue #16876: [Numpy] Implementation npx.{sample}_n

2019-11-21 Thread GitBox
sxjscience commented on issue #16876: [Numpy] Implementation npx.{sample}_n URL: https://github.com/apache/incubator-mxnet/pull/16876#issuecomment-557421312 LGTM

[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16876: [Numpy] Implementation npx.{sample}_n

2019-11-21 Thread GitBox
sxjscience commented on a change in pull request #16876: [Numpy] Implementation npx.{sample}_n URL: https://github.com/apache/incubator-mxnet/pull/16876#discussion_r349458918 ## File path: tests/nightly/test_np_random.py ## @@ -0,0 +1,83 @@ +# Licensed to the Apache Softwa

[GitHub] [incubator-mxnet] liuzh91 opened a new pull request #16888: Add evaluation_loss to the estimator base class.

2019-11-21 Thread GitBox
liuzh91 opened a new pull request #16888: Add evaluation_loss to the estimator base class. URL: https://github.com/apache/incubator-mxnet/pull/16888 ## Description ## [Bug_fix] Add evaluation loss member in estimator class. The purpose of adding the evaluation loss is to decouple the train

[GitHub] [incubator-mxnet] sxjscience opened a new issue #16887: [Numpy] Bug of basic indexing

2019-11-21 Thread GitBox
sxjscience opened a new issue #16887: [Numpy] Bug of basic indexing URL: https://github.com/apache/incubator-mxnet/issues/16887 Found this bug when writing random test cases for symbolic indexing. ```python import mxnet as mx from mxnet import gluon mx.npx.set_np() a =

[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16876: [Numpy] Implementation npx.{sample}_n

2019-11-21 Thread GitBox
xidulu commented on a change in pull request #16876: [Numpy] Implementation npx.{sample}_n URL: https://github.com/apache/incubator-mxnet/pull/16876#discussion_r349456580 ## File path: tests/python/unittest/test_numpy_op.py ## @@ -2669,6 +2669,45 @@ def hybrid_forward(self

[GitHub] [incubator-mxnet] reminisce commented on issue #16824: Enable unit tests for TVM ops for all cuda compute capabilities

2019-11-21 Thread GitBox
reminisce commented on issue #16824: Enable unit tests for TVM ops for all cuda compute capabilities URL: https://github.com/apache/incubator-mxnet/pull/16824#issuecomment-557414420 @ptrendx This does not affect 1.6. We plan not to release TVM powered operators in 1.6. If you see the inval

[GitHub] [incubator-mxnet] adis300 commented on issue #15303: Fix amalgamation failure.

2019-11-21 Thread GitBox
adis300 commented on issue #15303: Fix amalgamation failure. URL: https://github.com/apache/incubator-mxnet/pull/15303#issuecomment-557404304 @marcoabreu @TaoLv I have just rebased the feature onto the latest master branch and resolved related conflicts.

[GitHub] [incubator-mxnet] haojin2 opened a new pull request #16886: [DO NOT MERGE] [DO NOT REVIEW] boolean_mask_assign with start_axis

2019-11-21 Thread GitBox
haojin2 opened a new pull request #16886: [DO NOT MERGE] [DO NOT REVIEW] boolean_mask_assign with start_axis URL: https://github.com/apache/incubator-mxnet/pull/16886 ## Description ## (Brief description on what this PR is about) ## Checklist ## ### Essentials ### Please fee

[GitHub] [incubator-mxnet] leezu commented on a change in pull request #16878: add micro to pearsonr

2019-11-21 Thread GitBox
leezu commented on a change in pull request #16878: add micro to pearsonr URL: https://github.com/apache/incubator-mxnet/pull/16878#discussion_r349432215 ## File path: python/mxnet/metric.py ## @@ -1438,13 +1449,46 @@ class PearsonCorrelation(EvalMetric): >>> pr = mx.m

[GitHub] [incubator-mxnet] leezu commented on a change in pull request #16878: add micro to pearsonr

2019-11-21 Thread GitBox
leezu commented on a change in pull request #16878: add micro to pearsonr URL: https://github.com/apache/incubator-mxnet/pull/16878#discussion_r349432428 ## File path: python/mxnet/metric.py ## @@ -1457,16 +1501,37 @@ def update(self, labels, preds): Predicted

[GitHub] [incubator-mxnet] zburning commented on issue #16878: add micro to pearsonr

2019-11-21 Thread GitBox
zburning commented on issue #16878: add micro to pearsonr URL: https://github.com/apache/incubator-mxnet/pull/16878#issuecomment-557382220 Actually, I also test the run time performance locally. But the current test_metric_perf.py doesn't test micro performance. Do you think it's necessary

[GitHub] [incubator-mxnet] jeremiedb edited a comment on issue #15994: ONNX import/export: Upsampling

2019-11-21 Thread GitBox
jeremiedb edited a comment on issue #15994: ONNX import/export: Upsampling URL: https://github.com/apache/incubator-mxnet/pull/15994#issuecomment-557378384 Any development? Also facing an Upsampling issue trying to import: https://github.com/onnx/models/blob/master/vision/style_transfer

[GitHub] [incubator-mxnet] jeremiedb commented on issue #15994: ONNX import/export: Upsampling

2019-11-21 Thread GitBox
jeremiedb commented on issue #15994: ONNX import/export: Upsampling URL: https://github.com/apache/incubator-mxnet/pull/15994#issuecomment-557378384 Any development? Also facing an Upsampling issue trying to import: https://github.com/onnx/models/blob/master/vision/style_transfer/fast_n

[GitHub] [incubator-mxnet] szha commented on issue #16864: [Discussion] 1.7.0 Roadmap

2019-11-21 Thread GitBox
szha commented on issue #16864: [Discussion] 1.7.0 Roadmap URL: https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-557372476 I was referring to the instructions just below the lines you were quoting: > If you have any item that you'd like to propose to have in the r

[incubator-mxnet] branch v1.6.x updated (200f0ec -> e73c186)

2019-11-21 Thread ptrendx
This is an automated email from the ASF dual-hosted git repository. ptrendx pushed a change to branch v1.6.x in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 200f0ec [v1.6.x] Backport #16837 into v1.6.x (#16847) add e73c186 Backport #16798, #16836 and #16838

[GitHub] [incubator-mxnet] ptrendx commented on issue #16872: Backport #16856 to 1.6

2019-11-21 Thread GitBox
ptrendx commented on issue #16872: Backport #16856 to 1.6 URL: https://github.com/apache/incubator-mxnet/pull/16872#issuecomment-557372096 @stu1130 please rebase this on top of current 1.6.x branch, which has the necessary commit.

[GitHub] [incubator-mxnet] cjolivier01 commented on issue #16864: [Discussion] 1.7.0 Roadmap

2019-11-21 Thread GitBox
cjolivier01 commented on issue #16864: [Discussion] 1.7.0 Roadmap URL: https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-557372008 > @cjolivier01 @pengzhao-intel @ptrendx would you mind opening a feature request issue as suggested by the initial post? The roadmap issue is

[GitHub] [incubator-mxnet] ptrendx merged pull request #16874: Backport #16798, #16836 and #16838 to 1.6

2019-11-21 Thread GitBox
ptrendx merged pull request #16874: Backport #16798, #16836 and #16838 to 1.6 URL: https://github.com/apache/incubator-mxnet/pull/16874

[GitHub] [incubator-mxnet] xidulu commented on a change in pull request #16876: [Numpy] Implementation npx.{sample}_n

2019-11-21 Thread GitBox
xidulu commented on a change in pull request #16876: [Numpy] Implementation npx.{sample}_n URL: https://github.com/apache/incubator-mxnet/pull/16876#discussion_r349417232 ## File path: tests/python/unittest/test_numpy_op.py ## @@ -2669,6 +2669,45 @@ def hybrid_forward(self

[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16884: [Backport][v1.6.x] Fix the wrong result of sum, mean, argmin, argmax when inputs contain inf or nan

2019-11-21 Thread GitBox
wkcn commented on a change in pull request #16884: [Backport][v1.6.x] Fix the wrong result of sum, mean, argmin, argmax when inputs contain inf or nan URL: https://github.com/apache/incubator-mxnet/pull/16884#discussion_r349416162 ## File path: src/operator/tensor/elemwise_unary_op.

[GitHub] [incubator-mxnet] access2rohit opened a new pull request #16885: [WIP]Multi Precision Lamb Update operator

2019-11-21 Thread GitBox
access2rohit opened a new pull request #16885: [WIP]Multi Precision Lamb Update operator URL: https://github.com/apache/incubator-mxnet/pull/16885 ## Description ## adding two new operators: - mp_lamb_update_phase1 - mp_lamb_update_phase2 Link to paper: https://arxiv.

[GitHub] [incubator-mxnet] wkcn opened a new pull request #16884: [Backport][v1.6.x] Fix the wrong result of sum, mean, argmin, argmax when inputs contain inf or nan

2019-11-21 Thread GitBox
wkcn opened a new pull request #16884: [Backport][v1.6.x] Fix the wrong result of sum, mean, argmin, argmax when inputs contain inf or nan URL: https://github.com/apache/incubator-mxnet/pull/16884 Hi, there. In v1.6.x, there is a bug of reduce operators when the inputs contain inf and n
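
For context, the reference NumPy semantics these reduce operators are expected to match can be checked directly (a quick illustration of the standard behavior, not code from the PR):

```python
import numpy as np

a = np.array([1.0, np.inf, 2.0])
b = np.array([1.0, np.nan, 2.0])

# Reductions propagate inf and nan rather than ignoring them.
print(np.sum(a))     # inf
print(np.sum(b))     # nan
print(np.mean(b))    # nan

# argmax/argmin treat nan as the extremal value and return its index.
print(np.argmax(b))  # 1
print(np.argmin(b))  # 1
```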

[GitHub] [incubator-mxnet] Tommliu commented on a change in pull request #16862: Op Unravel_index PR [Numpy]

2019-11-21 Thread GitBox
Tommliu commented on a change in pull request #16862: Op Unravel_index PR [Numpy] URL: https://github.com/apache/incubator-mxnet/pull/16862#discussion_r349409841 ## File path: python/mxnet/numpy/multiarray.py ## @@ -57,7 +57,7 @@ 'blackman', 'flip', 'around', '

[GitHub] [incubator-mxnet] szha commented on issue #16864: [Discussion] 1.7.0 Roadmap

2019-11-21 Thread GitBox
szha commented on issue #16864: [Discussion] 1.7.0 Roadmap URL: https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-557359548 @cjolivier01 @pengzhao-intel @ptrendx would you mind opening a feature request issue as suggested by the initial post? The roadmap issue is usually

[GitHub] [incubator-mxnet] sxjscience commented on issue #16880: Better to flatten the label array in metric.F1()

2019-11-21 Thread GitBox
sxjscience commented on issue #16880: Better to flatten the label array in metric.F1() URL: https://github.com/apache/incubator-mxnet/issues/16880#issuecomment-557352302 @zburning I think the guideline for the refactoring is to try to follow the convention of scikit-learn.

[GitHub] [incubator-mxnet] zburning commented on issue #16880: Better to flatten the label array in metric.F1()

2019-11-21 Thread GitBox
zburning commented on issue #16880: Better to flatten the label array in metric.F1() URL: https://github.com/apache/incubator-mxnet/issues/16880#issuecomment-557351933 Thank you for the explanation! So the current implementation in metric.F1() is not good because it only supports binary cla

[GitHub] [incubator-mxnet] sxjscience opened a new pull request #16883: Add arange_like to npx

2019-11-21 Thread GitBox
sxjscience opened a new pull request #16883: Add arange_like to npx URL: https://github.com/apache/incubator-mxnet/pull/16883 ## Description ## Move arange_like to npx to support the numpy example of Transformer ## Checklist ## ### Essentials ### Please feel free to remove in

[incubator-mxnet] branch v1.6.x updated (33a3af9 -> 200f0ec)

2019-11-21 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository. patriczhao pushed a change to branch v1.6.x in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 33a3af9 Fix test_gluon.py:test_sync_batchnorm when number of GPUS > 4 (#16835) add 200f0ec [v1.6

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16845: MXNet 1.6.0 performance regression

2019-11-21 Thread GitBox
pengzhao-intel commented on issue #16845: MXNet 1.6.0 performance regression URL: https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-557349460 cc @TaoLv

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16845: MXNet 1.6.0 performance regression

2019-11-21 Thread GitBox
pengzhao-intel commented on issue #16845: MXNet 1.6.0 performance regression URL: https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-557349398 @rongzha1 please try to run the script and verify the CPU performance.

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16847: [v1.6.x] Backport #16837 into v1.6.x

2019-11-21 Thread GitBox
pengzhao-intel commented on issue #16847: [v1.6.x] Backport #16837 into v1.6.x URL: https://github.com/apache/incubator-mxnet/pull/16847#issuecomment-557349164 Merging now. Thanks @ptrendx

[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16847: [v1.6.x] Backport #16837 into v1.6.x

2019-11-21 Thread GitBox
pengzhao-intel merged pull request #16847: [v1.6.x] Backport #16837 into v1.6.x URL: https://github.com/apache/incubator-mxnet/pull/16847

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-21 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new aefff7f Bump the publis

[GitHub] [incubator-mxnet] ptrendx commented on issue #16874: Backport #16798, #16836 and #16838 to 1.6

2019-11-21 Thread GitBox
ptrendx commented on issue #16874: Backport #16798, #16836 and #16838 to 1.6 URL: https://github.com/apache/incubator-mxnet/pull/16874#issuecomment-557335295 Ok, so @haojin2 please make a PR to 1.6.x branch with both #16827 and #16791.

[GitHub] [incubator-mxnet] wkcn edited a comment on issue #16881: Add TypeFlag=>string macro

2019-11-21 Thread GitBox
wkcn edited a comment on issue #16881: Add TypeFlag=>string macro URL: https://github.com/apache/incubator-mxnet/pull/16881#issuecomment-557333251 I prefer to add the type name in DataType class, and get the type name from `mshadow::DataType<DType>::kName`. https://github.com/apache/incubator

[GitHub] [incubator-mxnet] wkcn commented on issue #16881: Add TypeFlag=>string macro

2019-11-21 Thread GitBox
wkcn commented on issue #16881: Add TypeFlag=>string macro URL: https://github.com/apache/incubator-mxnet/pull/16881#issuecomment-557333251 I prefer to add the type name in DataType class. https://github.com/apache/incubator-mxnet/blob/master/3rdparty/mshadow/mshadow/base.h#L321

[GitHub] [incubator-mxnet] wkcn commented on a change in pull request #16881: Add TypeFlag=>string macro

2019-11-21 Thread GitBox
wkcn commented on a change in pull request #16881: Add TypeFlag=>string macro URL: https://github.com/apache/incubator-mxnet/pull/16881#discussion_r349383907 ## File path: include/mxnet/base.h ## @@ -85,6 +85,18 @@ */ #define PROFILER_MESSAGE_FUNCNAME (__FUNCTION__) +/

[GitHub] [incubator-mxnet] haojin2 commented on issue #16827: Refactor NumPy-compatible elemwise broadcast operators

2019-11-21 Thread GitBox
haojin2 commented on issue #16827: Refactor NumPy-compatible elemwise broadcast operators URL: https://github.com/apache/incubator-mxnet/pull/16827#issuecomment-557325172 @ptrendx

[GitHub] [incubator-mxnet] haojin2 commented on issue #16874: Backport #16798, #16836 and #16838 to 1.6

2019-11-21 Thread GitBox
haojin2 commented on issue #16874: Backport #16798, #16836 and #16838 to 1.6 URL: https://github.com/apache/incubator-mxnet/pull/16874#issuecomment-557325048 @ptrendx There's a separate PR #16827 that is needed to fix such issues. #16827 made a major refactor to the np_elemwise_binary_broad

[GitHub] [incubator-mxnet] ptrendx commented on issue #16874: Backport #16798, #16836 and #16838 to 1.6

2019-11-21 Thread GitBox
ptrendx commented on issue #16874: Backport #16798, #16836 and #16838 to 1.6 URL: https://github.com/apache/incubator-mxnet/pull/16874#issuecomment-557324624 Due to problems with compilation on Windows I removed #16791 from this bulk of cherry-picks. @haojin2 Please make a separate PR to br

[incubator-mxnet] branch v1.6.x updated: Fix test_gluon.py:test_sync_batchnorm when number of GPUS > 4 (#16835)

2019-11-21 Thread ptrendx
This is an automated email from the ASF dual-hosted git repository. ptrendx pushed a commit to branch v1.6.x in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/v1.6.x by this push: new 33a3af9 Fix test_gluon.py:test_sync_b

[GitHub] [incubator-mxnet] ptrendx merged pull request #16835: Fix test_gluon.py:test_sync_batchnorm when number of GPUS > 4

2019-11-21 Thread GitBox
ptrendx merged pull request #16835: Fix test_gluon.py:test_sync_batchnorm when number of GPUS > 4 URL: https://github.com/apache/incubator-mxnet/pull/16835

[GitHub] [incubator-mxnet] ptrendx commented on issue #16824: Enable unit tests for TVM ops for all cuda compute capabilities

2019-11-21 Thread GitBox
ptrendx commented on issue #16824: Enable unit tests for TVM ops for all cuda compute capabilities URL: https://github.com/apache/incubator-mxnet/pull/16824#issuecomment-557322996 This affects 1.6, right? I encountered similar errors (`CUDA_ERROR_INVALID_PTX`) in testing of my unrelated PR

[GitHub] [incubator-mxnet] larroy commented on issue #16835: Fix test_gluon.py:test_sync_batchnorm when number of GPUS > 4

2019-11-21 Thread GitBox
larroy commented on issue #16835: Fix test_gluon.py:test_sync_batchnorm when number of GPUS > 4 URL: https://github.com/apache/incubator-mxnet/pull/16835#issuecomment-557322878 @ptrendx

[GitHub] [incubator-mxnet] ptrendx commented on issue #16796: Add support for boolean inputs to FusedOp

2019-11-21 Thread GitBox
ptrendx commented on issue #16796: Add support for boolean inputs to FusedOp URL: https://github.com/apache/incubator-mxnet/pull/16796#issuecomment-557312888 @marcoabreu @larroy Could you tell me what is the configuration of the unix-gpu test runners? They make TVM error out and I cannot re

[GitHub] [incubator-mxnet] larroy commented on issue #16753: fail to build using docker

2019-11-21 Thread GitBox
larroy commented on issue #16753: fail to build using docker URL: https://github.com/apache/incubator-mxnet/issues/16753#issuecomment-557293733 I built the latest from master without any problems, did you update submodules? ``` time ci/build.py -p armv7 ... 2019-11-21 2

[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer update

2019-11-21 Thread GitBox
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer update URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r349305905 ## File path: python/mxnet/optimizer/optimizer.py ## @@ -1244,6 +1244,54 @@ def update(self, index, weight, g

[GitHub] [incubator-mxnet] zeeshansayyed commented on issue #16882: Gradient clipping across multiple GPUs

2019-11-21 Thread GitBox
zeeshansayyed commented on issue #16882: Gradient clipping across multiple GPUs URL: https://github.com/apache/incubator-mxnet/issues/16882#issuecomment-557254618 The example which I found was using `gluonnlp.utils.clip_grad_global_norm` as follows: ```python trainer.allreduce_gr
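
For reference, the computation behind global-norm clipping utilities such as `clip_grad_global_norm` can be sketched framework-agnostically with NumPy; the function name and shapes here are illustrative, not the gluonnlp API:

```python
import numpy as np

def clip_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their joint L2 norm does not
    exceed max_norm. Returns (scaled_grads, original_total_norm)."""
    # Joint L2 norm across all gradient arrays, as if concatenated into one vector.
    total_norm = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    scale = max_norm / (total_norm + 1e-12)
    if scale < 1.0:  # only shrink, never amplify
        grads = [g * scale for g in grads]
    return grads, total_norm
```

On multiple GPUs, one would first all-reduce (sum) the gradients across devices and then apply this rescaling once, which is what the `trainer.allreduce_grads()` pattern in the quoted snippet arranges before clipping.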

[GitHub] [incubator-mxnet] marcoabreu commented on issue #15882: Move Windows CI build to a 64-bit toolchain to fix 'out of heap space'.

2019-11-21 Thread GitBox
marcoabreu commented on issue #15882: Move Windows CI build to a 64-bit toolchain to fix 'out of heap space'. URL: https://github.com/apache/incubator-mxnet/pull/15882#issuecomment-557249348 It seems like I was under the impression that we are dropping support of some visual studio version

[GitHub] [incubator-mxnet] ptrendx commented on issue #16872: Backport #16856 to 1.6

2019-11-21 Thread GitBox
ptrendx commented on issue #16872: Backport #16856 to 1.6 URL: https://github.com/apache/incubator-mxnet/pull/16872#issuecomment-557246193 This PR relies on #16847

[GitHub] [incubator-mxnet] ptrendx commented on issue #16874: Backport #16798, #16791 and #16838 to 1.6

2019-11-21 Thread GitBox
ptrendx commented on issue #16874: Backport #16798, #16791 and #16838 to 1.6 URL: https://github.com/apache/incubator-mxnet/pull/16874#issuecomment-557243763 @haojin2 Windows build failed with `fatal error C1002: compiler is out of heap space in pass 2` - did you do anything in the other PR

[GitHub] [incubator-mxnet] zeeshansayyed opened a new issue #16882: Gradient clipping across multiple GPUs

2019-11-21 Thread GitBox
zeeshansayyed opened a new issue #16882: Gradient clipping across multiple GPUs URL: https://github.com/apache/incubator-mxnet/issues/16882 Hello, Can someone please point me to an example where gradient clipping can be performed on multiple GPUs. Thanks Zeeshan

[incubator-mxnet] branch master updated (ece027c -> 4da14a2)

2019-11-21 Thread haoj
This is an automated email from the ASF dual-hosted git repository. haoj pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from ece027c add numpy op diagflat [numpy] (#16813) add 4da14a2 add op bitwise_or [numpy] (#16801) No new r

[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16876: [Numpy] Implementation npx.{sample}_n

2019-11-21 Thread GitBox
sxjscience commented on a change in pull request #16876: [Numpy] Implementation npx.{sample}_n URL: https://github.com/apache/incubator-mxnet/pull/16876#discussion_r349255827 ## File path: tests/python/unittest/test_numpy_op.py ## @@ -2669,6 +2669,45 @@ def hybrid_forward(

[incubator-mxnet] branch master updated (a8b31a2 -> ece027c)

2019-11-21 Thread haoj
This is an automated email from the ASF dual-hosted git repository. haoj pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from a8b31a2 Fix InferAttr/InferShapeAttr not calling inference for all nodes in a graph (#16836) add ece027

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-21 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new b63420c Bump the publis

[GitHub] [incubator-mxnet] haojin2 merged pull request #16801: add op bitwise_or [numpy]

2019-11-21 Thread GitBox
haojin2 merged pull request #16801: add op bitwise_or [numpy] URL: https://github.com/apache/incubator-mxnet/pull/16801

[GitHub] [incubator-mxnet] haojin2 merged pull request #16813: add numpy op diagflat [numpy]

2019-11-21 Thread GitBox
haojin2 merged pull request #16813: add numpy op diagflat [numpy] URL: https://github.com/apache/incubator-mxnet/pull/16813

[GitHub] [incubator-mxnet] DickJC123 commented on issue #15882: Move Windows CI build to a 64-bit toolchain to fix 'out of heap space'.

2019-11-21 Thread GitBox
DickJC123 commented on issue #15882: Move Windows CI build to a 64-bit toolchain to fix 'out of heap space'. URL: https://github.com/apache/incubator-mxnet/pull/15882#issuecomment-557216520 @marcoabreu Sounds like if I resubmitted the core of this PR, you'd support it. Anything specific b

[GitHub] [incubator-mxnet] sxjscience commented on issue #16878: add micro to pearsonr

2019-11-21 Thread GitBox
sxjscience commented on issue #16878: add micro to pearsonr URL: https://github.com/apache/incubator-mxnet/pull/16878#issuecomment-557215468 Nice add! Could you also add the test here? https://github.com/apache/incubator-mxnet/blob/a8b31a239f5d5ed0ebff0f3be44b5e5534e0b3f5/tests/python/unitt

[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16878: add micro to pearsonr

2019-11-21 Thread GitBox
sxjscience commented on a change in pull request #16878: add micro to pearsonr URL: https://github.com/apache/incubator-mxnet/pull/16878#discussion_r349245753 ## File path: python/mxnet/metric.py ## @@ -1438,13 +1449,46 @@ class PearsonCorrelation(EvalMetric): >>> pr =

[GitHub] [incubator-mxnet] DickJC123 commented on issue #16831: [CI] Python2: CPU - hangs after test_create_np_param

2019-11-21 Thread GitBox
DickJC123 commented on issue #16831: [CI] Python2: CPU - hangs after test_create_np_param URL: https://github.com/apache/incubator-mxnet/issues/16831#issuecomment-557210749 I was under the impression that when a PR goes through CI, the code tested is a merge of the PR with the then-curren

[GitHub] [incubator-mxnet] sxjscience commented on issue #16880: Better to flatten the label array in metric.F1()

2019-11-21 Thread GitBox
sxjscience commented on issue #16880: Better to flatten the label array in metric.F1() URL: https://github.com/apache/incubator-mxnet/issues/16880#issuecomment-557207181 We will have label shape = (B, N_labels) in multi-label classification problems, e.g., the PPI dataset used in Graph Ne
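
To illustrate why flattening matters: with binary labels of shape (B, N_labels), a micro-averaged F1 can be computed by flattening both arrays to 1-D and counting decisions globally. This is a NumPy sketch under that binary-label assumption, not the metric.F1 implementation:

```python
import numpy as np

def micro_f1(labels, preds):
    """labels, preds: binary arrays of shape (B, N_labels)."""
    y, p = labels.ravel(), preds.ravel()  # flatten to 1-D before counting
    tp = np.sum((y == 1) & (p == 1))      # true positives over all entries
    fp = np.sum((y == 0) & (p == 1))
    fn = np.sum((y == 1) & (p == 0))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0
```

Without the `ravel()`, a metric that assumes 1-D binary labels would reject or misread the (B, N_labels) multi-label case described above.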

[GitHub] [incubator-mxnet] cjolivier01 commented on issue #16864: [Discussion] 1.7.0 Roadmap

2019-11-21 Thread GitBox
cjolivier01 commented on issue #16864: [Discussion] 1.7.0 Roadmap URL: https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-557204872 > XLA is effectively dead at this point so I'm not sure why we would want to invest in that. MLIR is not really ready for prime time. Out of

[GitHub] [incubator-mxnet] cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)

2019-11-21 Thread GitBox
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix) URL: https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-557201298 Actually, i don't know what this issue is about. There's no actual report of a problem in the description.

[GitHub] [incubator-mxnet] cjolivier01 removed a comment on issue #11417: libomp.so dependency (need REAL fix)

2019-11-21 Thread GitBox
cjolivier01 removed a comment on issue #11417: libomp.so dependency (need REAL fix) URL: https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-557196823 btw: ``` [chriso@chriso-dev:/opt/python3.6b]ldd ./lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_i

[GitHub] [incubator-mxnet] cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)

2019-11-21 Thread GitBox
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix) URL: https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-557196823 btw: ``` [chriso@chriso-dev:/opt/python3.6b]ldd ./lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.s

[GitHub] [incubator-mxnet] cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)

2019-11-21 Thread GitBox
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix) URL: https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-557196011 I always see this junk in there as well, but that doesn't necessarily mean it'll link it: ```cmake OpenMP_CXX_LIB_NAMES:STRING=gomp

[GitHub] [incubator-mxnet] ptrendx commented on issue #16864: [Discussion] 1.7.0 Roadmap

2019-11-21 Thread GitBox
ptrendx commented on issue #16864: [Discussion] 1.7.0 Roadmap URL: https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-557193983 XLA is effectively dead at this point so I'm not sure why we would want to invest in that. MLIR is not really ready for prime time. Out of all of

[GitHub] [incubator-mxnet] marcoabreu commented on issue #15882: Move Windows CI build to a 64-bit toolchain to fix 'out of heap space'.

2019-11-21 Thread GitBox
marcoabreu commented on issue #15882: Move Windows CI build to a 64-bit toolchain to fix 'out of heap space'. URL: https://github.com/apache/incubator-mxnet/pull/15882#issuecomment-557171434 Happy to move forward with the upgrade to 64bit ---

[GitHub] [incubator-mxnet] jonatan1626 commented on issue #16845: MXNet 1.6.0 performance regression

2019-11-21 Thread GitBox
jonatan1626 commented on issue #16845: MXNet 1.6.0 performance regression URL: https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-557149820 I have also uploaded the scripts to: [Here](https://github.com/jonatan1626/mxnet-performance-benchmark/tree/master). Do let me know

[GitHub] [incubator-mxnet] jonatan1626 edited a comment on issue #16845: MXNet 1.6.0 performance regression

2019-11-21 Thread GitBox
jonatan1626 edited a comment on issue #16845: MXNet 1.6.0 performance regression URL: https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-557144952 @pengzhao-intel The runs just finished; there was an error when running resnet50_v1, so I have restarted the job and will post

[GitHub] [incubator-mxnet] jonatan1626 commented on issue #16845: MXNet 1.6.0 performance regression

2019-11-21 Thread GitBox
jonatan1626 commented on issue #16845: MXNet 1.6.0 performance regression URL: https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-557144952 @pengzhao-intel The runs just finished; there was an error when running resnet50_v1, so I have restarted the job and will post the res

[GitHub] [incubator-mxnet] Kh4L opened a new pull request #16881: Add TypeFlag=>string macro

2019-11-21 Thread GitBox
Kh4L opened a new pull request #16881: Add TypeFlag=>string macro URL: https://github.com/apache/incubator-mxnet/pull/16881 ## Description ## Add a macro mapping mshadow type_flag to strings, to improve debuggability. ## Checklist ## ### Essentials ### Please feel free to re
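The idea in this PR — turning an integer dtype flag into a readable name for error messages — can be sketched in Python. Note this is an illustrative analogue, not the PR's C++ macro, and the flag values below are an assumption about mshadow's TypeFlag enum order, not an authoritative definition:

```python
# Hypothetical flag values mirroring mshadow's TypeFlag enum order;
# treat these as an assumption, not the authoritative definition.
TYPE_FLAG_NAMES = {
    0: "float32",
    1: "float64",
    2: "float16",
    3: "uint8",
    4: "int32",
    5: "int8",
    6: "int64",
}

def type_flag_to_string(flag):
    """Return a readable dtype name for an integer type flag."""
    return TYPE_FLAG_NAMES.get(flag, "unknown({})".format(flag))
```

Emitting names instead of raw integers in shape/dtype error messages is the debuggability win the PR description points at.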

[GitHub] [incubator-mxnet] zburning opened a new issue #16880: Better to flatten the label array in metric.F1()

2019-11-21 Thread GitBox
zburning opened a new issue #16880: Better to flatten the label array in metric.F1() URL: https://github.com/apache/incubator-mxnet/issues/16880 ## Description Unlike the other metrics, the current metric.F1() doesn't flatten the label. Commonly the label would have the shape of (ba
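The flattening the issue asks for can be sketched in plain Python — a minimal illustration of why a (batch_size, 1)-shaped label array needs to be collapsed before element-wise comparison, not MXNet's actual metric code (the helper names are hypothetical):

```python
def flatten(labels):
    """Collapse a (batch_size, 1)-style nested list to a flat list."""
    return [x for row in labels
            for x in (row if isinstance(row, list) else [row])]

def binary_f1(labels, preds):
    """F1 score for binary labels/predictions given as flat lists."""
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Labels arrive shaped (batch_size, 1); flatten before pairing with
# the flat predictions, as the issue suggests the metric should do.
labels = [[1], [0], [1], [1]]
preds = [1, 0, 0, 1]
score = binary_f1(flatten(labels), preds)
```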

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-21 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new b898a0f Bump the publis

[GitHub] [incubator-mxnet] haojin2 commented on issue #16770: Flaky test: test_ops.test_convolution2d

2019-11-21 Thread GitBox
haojin2 commented on issue #16770: Flaky test: test_ops.test_convolution2d URL: https://github.com/apache/incubator-mxnet/issues/16770#issuecomment-557016446 @ptrendx @DickJC123 This is happening quite often for TensorRT tests, could you guys take a look? I believe it could also be

[GitHub] [incubator-mxnet] haojin2 commented on issue #16770: Flaky test: test_ops.test_convolution2d

2019-11-21 Thread GitBox
haojin2 commented on issue #16770: Flaky test: test_ops.test_convolution2d URL: https://github.com/apache/incubator-mxnet/issues/16770#issuecomment-557016136 Happening again: http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-16801/13/pip

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16862: Op Unravel_index PR [Numpy]

2019-11-21 Thread GitBox
haojin2 commented on a change in pull request #16862: Op Unravel_index PR [Numpy] URL: https://github.com/apache/incubator-mxnet/pull/16862#discussion_r348997859 ## File path: python/mxnet/numpy/multiarray.py ## @@ -57,7 +57,7 @@ 'blackman', 'flip', 'around', '
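For reference, the contract of `unravel_index` — converting a flat index into multi-dimensional coordinates for a given shape — can be sketched in plain Python. This illustrates the operator's expected semantics only, not the PR's implementation:

```python
def unravel_index(flat_index, shape):
    """Convert a flat index into a coordinate tuple for `shape`
    (row-major order), mirroring numpy.unravel_index for a scalar."""
    coords = []
    for dim in reversed(shape):
        coords.append(flat_index % dim)
        flat_index //= dim
    return tuple(reversed(coords))

# Flat index 7 in a 3x4 array lands at row 1, column 3.
coord = unravel_index(7, (3, 4))
```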

[GitHub] [incubator-mxnet] liuzh91 commented on issue #16879: loss for training and evaluation in estimator could be different

2019-11-21 Thread GitBox
liuzh91 commented on issue #16879: loss for training and evaluation in estimator could be different URL: https://github.com/apache/incubator-mxnet/issues/16879#issuecomment-557005297 > How about introducing a new `evaluation_loss` or `evaluate_loss` argument to the constructor. If it is N

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16865: [numpy]add op insert

2019-11-21 Thread GitBox
haojin2 commented on a change in pull request #16865: [numpy]add op insert URL: https://github.com/apache/incubator-mxnet/pull/16865#discussion_r348985511 ## File path: src/operator/numpy/np_insert_op-inl.h ## @@ -0,0 +1,638 @@ +/* + * Licensed to the Apache Software Founda

[GitHub] [incubator-mxnet] leezu commented on issue #16879: loss for training and evaluation in estimator could be different

2019-11-21 Thread GitBox
leezu commented on issue #16879: loss for training and evaluation in estimator could be different URL: https://github.com/apache/incubator-mxnet/issues/16879#issuecomment-556997545 How about introducing a new `evaluation_loss` or `evaluate_loss` argument to the constructor. If it is None,
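The fallback behaviour leezu suggests can be sketched as follows — a hypothetical skeleton of the constructor logic, not the actual Gluon Estimator API:

```python
class Estimator:
    """Sketch: separate evaluation loss with fallback to training loss."""

    def __init__(self, loss, evaluation_loss=None):
        self.loss = loss
        # Fall back to the training loss when no separate evaluation
        # loss is given, preserving the current coupled behaviour.
        self.evaluation_loss = (evaluation_loss
                                if evaluation_loss is not None else loss)

    def fit_batch(self, pred, label):
        return self.loss(pred, label)

    def evaluate_batch(self, pred, label):
        return self.evaluation_loss(pred, label)
```

With `evaluation_loss=None` the two code paths stay identical, so existing users see no change; passing a distinct callable decouples training from evaluation as the issue requests.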

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16774: [Numpy] op empty_like, add nan_to_num to dispatch

2019-11-21 Thread GitBox
haojin2 commented on a change in pull request #16774: [Numpy] op empty_like, add nan_to_num to dispatch URL: https://github.com/apache/incubator-mxnet/pull/16774#discussion_r348973212 ## File path: python/mxnet/ndarray/numpy/_op.py ## @@ -39,7 +39,7 @@ 'around'

[GitHub] [incubator-mxnet] haojin2 commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv

2019-11-21 Thread GitBox
haojin2 commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv URL: https://github.com/apache/incubator-mxnet/issues/16830#issuecomment-556989622 Happening again: http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cp

[incubator-mxnet] branch v1.6.x updated: fix flakiness of test_np_mixed_precision_binary_funcs (#16873)

2019-11-21 Thread haoj
This is an automated email from the ASF dual-hosted git repository. haoj pushed a commit to branch v1.6.x in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/v1.6.x by this push: new 530bd27 fix flakiness of test_np_mixed_p

[GitHub] [incubator-mxnet] haojin2 merged pull request #16873: Fix flakiness of test_np_mixed_precision_binary_funcs

2019-11-21 Thread GitBox
haojin2 merged pull request #16873: Fix flakiness of test_np_mixed_precision_binary_funcs URL: https://github.com/apache/incubator-mxnet/pull/16873 This is an automated message from the Apache Git Service. To respond to the

[GitHub] [incubator-mxnet] liuzh91 opened a new issue #16879: loss for training and evaluation in estimator could be different

2019-11-21 Thread GitBox
liuzh91 opened a new issue #16879: loss for training and evaluation in estimator could be different URL: https://github.com/apache/incubator-mxnet/issues/16879 ## Description In current estimator implementation, fit_batch and evaluate_batch use the same loss function. Code snippet in

[GitHub] [incubator-mxnet] zburning opened a new pull request #16878: add micro to pearsonr

2019-11-21 Thread GitBox
zburning opened a new pull request #16878: add micro to pearsonr URL: https://github.com/apache/incubator-mxnet/pull/16878 ## Description ## add micro to pearson correlation coefficient. ## Checklist ## ### Essentials ### Please feel free to remove inapplicable items for your
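The micro-averaging idea — accumulating data across all batches and computing one global coefficient, instead of averaging per-batch coefficients — can be sketched in plain Python. This is an illustration under assumed semantics, not the PR's code:

```python
def pearsonr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

batches = [([1.0, 2.0], [1.0, 2.5]), ([3.0, 4.0], [2.9, 4.1])]

# "macro": average the per-batch coefficients
macro = sum(pearsonr(x, y) for x, y in batches) / len(batches)

# "micro": concatenate all batches, then compute one global coefficient
all_x = [v for x, _ in batches for v in x]
all_y = [v for _, y in batches for v in y]
micro = pearsonr(all_x, all_y)
```

With two points per batch every per-batch coefficient is trivially ±1, which is exactly why a global (micro) statistic can be the more meaningful choice.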

[GitHub] [incubator-mxnet] haojin2 commented on a change in pull request #16813: add numpy op diagflat [numpy]

2019-11-21 Thread GitBox
haojin2 commented on a change in pull request #16813: add numpy op diagflat [numpy] URL: https://github.com/apache/incubator-mxnet/pull/16813#discussion_r348943326 ## File path: src/operator/numpy/np_matrix_op.cc ## @@ -1325,5 +1326,27 @@ NNVM_REGISTER_OP(_backward_np_diag
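The NumPy contract being mirrored here — flatten the input, then place it on the k-th diagonal of a 2-D output — can be sketched in plain Python. This illustrates the expected semantics only, not the PR's kernel:

```python
def diagflat(data, k=0):
    """Return a 2-D list with the flattened `data` on the k-th diagonal,
    mirroring numpy.diagflat for (possibly nested) Python lists."""
    if data and isinstance(data[0], list):
        flat = [x for row in data for x in row]
    else:
        flat = list(data)
    n = len(flat) + abs(k)
    out = [[0] * n for _ in range(n)]
    for i, v in enumerate(flat):
        if k >= 0:
            out[i][i + k] = v        # super-diagonal (or main, k == 0)
        else:
            out[i - k][i] = v        # sub-diagonal
    return out
```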
