xidulu commented on a change in pull request #16876: [Numpy] Implementation
npx.{sample}_n
URL: https://github.com/apache/incubator-mxnet/pull/16876#discussion_r349462869
##
File path: tests/nightly/test_np_random.py
##
@@ -0,0 +1,83 @@
+# Licensed to the Apache Software F
kshitij12345 commented on issue #15331: [fix] missing input log higher order.
URL: https://github.com/apache/incubator-mxnet/pull/15331#issuecomment-557421604
@apeforest Sure no worries. Thanks.
This is an automated message fr
sxjscience commented on issue #16876: [Numpy] Implementation npx.{sample}_n
URL: https://github.com/apache/incubator-mxnet/pull/16876#issuecomment-557421312
LGTM
sxjscience commented on a change in pull request #16876: [Numpy] Implementation
npx.{sample}_n
URL: https://github.com/apache/incubator-mxnet/pull/16876#discussion_r349458918
##
File path: tests/nightly/test_np_random.py
##
@@ -0,0 +1,83 @@
+# Licensed to the Apache Softwa
liuzh91 opened a new pull request #16888: Add evaluation_loss to the estimator
base class.
URL: https://github.com/apache/incubator-mxnet/pull/16888
## Description ##
[Bug_fix] Add an evaluation loss member to the estimator class. The purpose of
adding the evaluation loss is to decouple the train
sxjscience opened a new issue #16887: [Numpy] Bug of basic indexing
URL: https://github.com/apache/incubator-mxnet/issues/16887
Found this bug when writing random test cases for symbolic indexing.
```python
import mxnet as mx
from mxnet import gluon
mx.npx.set_np()
a =
xidulu commented on a change in pull request #16876: [Numpy] Implementation
npx.{sample}_n
URL: https://github.com/apache/incubator-mxnet/pull/16876#discussion_r349456580
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -2669,6 +2669,45 @@ def hybrid_forward(self
reminisce commented on issue #16824: Enable unit tests for TVM ops for all cuda
compute capabilities
URL: https://github.com/apache/incubator-mxnet/pull/16824#issuecomment-557414420
@ptrendx This does not affect 1.6. We plan not to release TVM powered
operators in 1.6. If you see the inval
adis300 commented on issue #15303: Fix amalgamation failure.
URL: https://github.com/apache/incubator-mxnet/pull/15303#issuecomment-557404304
@marcoabreu @TaoLv I have just rebased the feature onto the latest master
branch and resolved related conflicts.
---
haojin2 opened a new pull request #16886: [DO NOT MERGE] [DO NOT REVIEW]
boolean_mask_assign with start_axis
URL: https://github.com/apache/incubator-mxnet/pull/16886
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
Please fee
leezu commented on a change in pull request #16878: add micro to pearsonr
URL: https://github.com/apache/incubator-mxnet/pull/16878#discussion_r349432215
##
File path: python/mxnet/metric.py
##
@@ -1438,13 +1449,46 @@ class PearsonCorrelation(EvalMetric):
>>> pr = mx.m
leezu commented on a change in pull request #16878: add micro to pearsonr
URL: https://github.com/apache/incubator-mxnet/pull/16878#discussion_r349432428
##
File path: python/mxnet/metric.py
##
@@ -1457,16 +1501,37 @@ def update(self, labels, preds):
Predicted
zburning commented on issue #16878: add micro to pearsonr
URL: https://github.com/apache/incubator-mxnet/pull/16878#issuecomment-557382220
Actually, I also tested the runtime performance locally, but the current
test_metric_perf.py doesn't test micro performance. Do you think it's necessary
jeremiedb edited a comment on issue #15994: ONNX import/export: Upsampling
URL: https://github.com/apache/incubator-mxnet/pull/15994#issuecomment-557378384
Any development? Also facing an Upsampling issue trying to import:
https://github.com/onnx/models/blob/master/vision/style_transfer
jeremiedb commented on issue #15994: ONNX import/export: Upsampling
URL: https://github.com/apache/incubator-mxnet/pull/15994#issuecomment-557378384
Any development? Also facing an Upsampling issue trying to import:
https://github.com/onnx/models/blob/master/vision/style_transfer/fast_n
szha commented on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-557372476
I was referring to the instructions just below the lines you were quoting:
> If you have any item that you'd like to propose to have in the r
This is an automated email from the ASF dual-hosted git repository.
ptrendx pushed a change to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 200f0ec [v1.6.x] Backport #16837 into v1.6.x (#16847)
add e73c186 Backport #16798, #16836 and #16838
ptrendx commented on issue #16872: Backport #16856 to 1.6
URL: https://github.com/apache/incubator-mxnet/pull/16872#issuecomment-557372096
@stu1130 please rebase this on top of the current 1.6.x branch, which has the
necessary commit.
---
cjolivier01 commented on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-557372008
> @cjolivier01 @pengzhao-intel @ptrendx would you mind opening a feature
request issue as suggested by the initial post? The roadmap issue is
ptrendx merged pull request #16874: Backport #16798, #16836 and #16838 to 1.6
URL: https://github.com/apache/incubator-mxnet/pull/16874
xidulu commented on a change in pull request #16876: [Numpy] Implementation
npx.{sample}_n
URL: https://github.com/apache/incubator-mxnet/pull/16876#discussion_r349417232
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -2669,6 +2669,45 @@ def hybrid_forward(self
wkcn commented on a change in pull request #16884: [Backport][v1.6.x] Fix the
wrong result of sum, mean, argmin, argmax when inputs contain inf or nan
URL: https://github.com/apache/incubator-mxnet/pull/16884#discussion_r349416162
##
File path: src/operator/tensor/elemwise_unary_op.
access2rohit opened a new pull request #16885: [WIP]Multi Precision Lamb Update
operator
URL: https://github.com/apache/incubator-mxnet/pull/16885
## Description ##
adding two new operators:
- mp_lamb_update_phase1
- mp_lamb_update_phase2
Link to paper: https://arxiv.
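The two operators above split a LAMB step into a moment/update phase and an apply phase. A minimal single-array sketch of the overall LAMB update in plain numpy is below; the function name, defaults, and exact phase boundary are assumptions for illustration, not the MXNet operator signatures (the mp_ variants additionally keep an FP32 master copy of the weights when gradients are FP16).

```python
import numpy as np

def lamb_update(weight, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999,
                eps=1e-6, wd=0.01):
    """One illustrative LAMB step (names and defaults are assumptions)."""
    # Phase 1: Adam-style moment updates with bias correction,
    # plus decoupled weight decay folded into the raw update.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    update = m_hat / (np.sqrt(v_hat) + eps) + wd * weight
    # Phase 2: layer-wise trust ratio (||w|| / ||update||) scales the step.
    w_norm = np.linalg.norm(weight)
    u_norm = np.linalg.norm(update)
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    weight = weight - lr * trust * update
    return weight, m, v

w = np.ones(4)
m = np.zeros(4)
v = np.zeros(4)
g = np.full(4, 0.5)
w, m, v = lamb_update(w, g, m, v, t=1)
```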
wkcn opened a new pull request #16884: [Backport][v1.6.x] Fix the wrong result
of sum, mean, argmin, argmax when inputs contain inf or nan
URL: https://github.com/apache/incubator-mxnet/pull/16884
Hi there.
In v1.6.x, there is a bug in the reduce operators when the inputs contain inf
and n
Tommliu commented on a change in pull request #16862: Op Unravel_index PR
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16862#discussion_r349409841
##
File path: python/mxnet/numpy/multiarray.py
##
@@ -57,7 +57,7 @@
'blackman', 'flip', 'around', '
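For context on what the Unravel_index PR mirrors: NumPy's `unravel_index` converts flat (row-major) indices into per-dimension coordinates for a given shape. A quick sketch of the reference semantics:

```python
import numpy as np

# unravel_index maps flat indices into coordinate tuples for an array
# of the given shape. In row-major order, flat index 22 in a (7, 6)
# array sits at row 22 // 6 = 3, column 22 % 6 = 4.
coords = np.unravel_index([22, 41, 37], (7, 6))
rows, cols = coords
```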
szha commented on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-557359548
@cjolivier01 @pengzhao-intel @ptrendx would you mind opening a feature
request issue as suggested by the initial post? The roadmap issue is usually
sxjscience commented on issue #16880: Better to flatten the label array in
metric.F1()
URL:
https://github.com/apache/incubator-mxnet/issues/16880#issuecomment-557352302
@zburning I think the guideline for the refactoring is to follow the
conventions of scikit-learn.
--
zburning commented on issue #16880: Better to flatten the label array in
metric.F1()
URL:
https://github.com/apache/incubator-mxnet/issues/16880#issuecomment-557351933
Thank you for the explanation!
So the current implementation in metric.F1() is not good because it only
supports binary cla
sxjscience opened a new pull request #16883: Add arange_like to npx
URL: https://github.com/apache/incubator-mxnet/pull/16883
## Description ##
Move arange_like to npx to support the numpy example of Transformer
## Checklist ##
### Essentials ###
Please feel free to remove in
This is an automated email from the ASF dual-hosted git repository.
patriczhao pushed a change to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 33a3af9 Fix test_gluon.py:test_sync_batchnorm when number of GPUS > 4
(#16835)
add 200f0ec [v1.6
pengzhao-intel commented on issue #16845: MXNet 1.6.0 performance regression
URL:
https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-557349460
cc @TaoLv
pengzhao-intel commented on issue #16845: MXNet 1.6.0 performance regression
URL:
https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-557349398
@rongzha1 please try to run the script and verify the CPU performance.
--
pengzhao-intel commented on issue #16847: [v1.6.x] Backport #16837 into v1.6.x
URL: https://github.com/apache/incubator-mxnet/pull/16847#issuecomment-557349164
Merging now. thanks @ptrendx
pengzhao-intel merged pull request #16847: [v1.6.x] Backport #16837 into v1.6.x
URL: https://github.com/apache/incubator-mxnet/pull/16847
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new aefff7f Bump the publis
ptrendx commented on issue #16874: Backport #16798, #16836 and #16838 to 1.6
URL: https://github.com/apache/incubator-mxnet/pull/16874#issuecomment-557335295
Ok, so @haojin2 please make a PR to 1.6.x branch with both #16827 and #16791.
---
wkcn edited a comment on issue #16881: Add TypeFlag=>string macro
URL: https://github.com/apache/incubator-mxnet/pull/16881#issuecomment-557333251
I prefer to add the type name in the DataType class, and get the type name
from `mshadow::DataType::kName`.
https://github.com/apache/incubator
wkcn commented on issue #16881: Add TypeFlag=>string macro
URL: https://github.com/apache/incubator-mxnet/pull/16881#issuecomment-557333251
I prefer to add the type name in the DataType class.
https://github.com/apache/incubator-mxnet/blob/master/3rdparty/mshadow/mshadow/base.h#L321
---
wkcn commented on a change in pull request #16881: Add TypeFlag=>string macro
URL: https://github.com/apache/incubator-mxnet/pull/16881#discussion_r349383907
##
File path: include/mxnet/base.h
##
@@ -85,6 +85,18 @@
*/
#define PROFILER_MESSAGE_FUNCNAME (__FUNCTION__)
+/
haojin2 commented on issue #16827: Refactor NumPy-compatible elemwise broadcast
operators
URL: https://github.com/apache/incubator-mxnet/pull/16827#issuecomment-557325172
@ptrendx
haojin2 commented on issue #16874: Backport #16798, #16836 and #16838 to 1.6
URL: https://github.com/apache/incubator-mxnet/pull/16874#issuecomment-557325048
@ptrendx There's a separate PR #16827 that is needed to fix such issues.
#16827 made a major refactor to the np_elemwise_binary_broad
ptrendx commented on issue #16874: Backport #16798, #16836 and #16838 to 1.6
URL: https://github.com/apache/incubator-mxnet/pull/16874#issuecomment-557324624
Due to problems with compilation on Windows I removed #16791 from this bulk
of cherry-picks. @haojin2 Please make a separate PR to br
This is an automated email from the ASF dual-hosted git repository.
ptrendx pushed a commit to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/v1.6.x by this push:
new 33a3af9 Fix test_gluon.py:test_sync_b
ptrendx merged pull request #16835: Fix test_gluon.py:test_sync_batchnorm when
number of GPUS > 4
URL: https://github.com/apache/incubator-mxnet/pull/16835
ptrendx commented on issue #16824: Enable unit tests for TVM ops for all cuda
compute capabilities
URL: https://github.com/apache/incubator-mxnet/pull/16824#issuecomment-557322996
This affects 1.6, right? I encountered similar errors
(`CUDA_ERROR_INVALID_PTX`) in testing of my unrelated PR
larroy commented on issue #16835: Fix test_gluon.py:test_sync_batchnorm when
number of GPUS > 4
URL: https://github.com/apache/incubator-mxnet/pull/16835#issuecomment-557322878
@ptrendx
ptrendx commented on issue #16796: Add support for boolean inputs to FusedOp
URL: https://github.com/apache/incubator-mxnet/pull/16796#issuecomment-557312888
@marcoabreu @larroy Could you tell me what the configuration of the
unix-gpu test runners is? They make TVM error out and I cannot re
larroy commented on issue #16753: fail to build using docker
URL:
https://github.com/apache/incubator-mxnet/issues/16753#issuecomment-557293733
I built the latest from master without any problems; did you update the
submodules?
```
time ci/build.py -p armv7
...
2019-11-21 2
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r349305905
##
File path: python/mxnet/optimizer/optimizer.py
##
@@ -1244,6 +1244,54 @@ def update(self, index, weight, g
zeeshansayyed commented on issue #16882: Gradient clipping across multiple GPUs
URL:
https://github.com/apache/incubator-mxnet/issues/16882#issuecomment-557254618
The example which I found was using `gluonnlp.utils.clip_grad_global_norm`
as follows:
```python
trainer.allreduce_gr
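The pattern behind `gluonnlp.utils.clip_grad_global_norm` is: sum the gradients across devices (as `trainer.allreduce_grads()` does in Gluon), compute one global L2 norm over all parameter gradients, then rescale every gradient by the same factor. A minimal numpy sketch of that rescaling step, with illustrative names rather than the gluonnlp API:

```python
import numpy as np

def clip_global_norm(grads, max_norm):
    """Scale gradient arrays in place so their combined L2 norm
    is at most max_norm; return the pre-clip global norm.

    grads: list of gradient arrays, one per parameter (assumed already
    reduced across devices). Illustrative sketch, not the gluonnlp API.
    """
    total = float(np.sqrt(sum(np.sum(g ** 2) for g in grads)))
    # Scale factor is <= 1, so gradients are never scaled up.
    scale = max_norm / max(total, max_norm)
    for g in grads:
        g *= scale
    return total

grads = [np.array([3.0, 0.0]), np.array([0.0, 4.0])]
norm = clip_global_norm(grads, max_norm=1.0)
```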
marcoabreu commented on issue #15882: Move Windows CI build to a 64-bit
toolchain to fix 'out of heap space'.
URL: https://github.com/apache/incubator-mxnet/pull/15882#issuecomment-557249348
It seems I was under the impression that we are dropping support for
some visual studio version
ptrendx commented on issue #16872: Backport #16856 to 1.6
URL: https://github.com/apache/incubator-mxnet/pull/16872#issuecomment-557246193
This PR relies on #16847
This is an automated message from the Apache Git Service.
To
ptrendx commented on issue #16874: Backport #16798, #16791 and #16838 to 1.6
URL: https://github.com/apache/incubator-mxnet/pull/16874#issuecomment-557243763
@haojin2 Windows build failed with `fatal error C1002: compiler is out of
heap space in pass 2` - did you do anything in the other PR
zeeshansayyed opened a new issue #16882: Gradient clipping across multiple GPUs
URL: https://github.com/apache/incubator-mxnet/issues/16882
Hello,
Can someone please point me to an example where gradient clipping can be
performed on multiple GPUs.
Thanks
Zeeshan
--
This is an automated email from the ASF dual-hosted git repository.
haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from ece027c add numpy op diagflat [numpy] (#16813)
add 4da14a2 add op bitwise_or [numpy] (#16801)
No new r
sxjscience commented on a change in pull request #16876: [Numpy] Implementation
npx.{sample}_n
URL: https://github.com/apache/incubator-mxnet/pull/16876#discussion_r349255827
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -2669,6 +2669,45 @@ def hybrid_forward(
This is an automated email from the ASF dual-hosted git repository.
haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from a8b31a2 Fix InferAttr/InferShapeAttr not calling inference for all
nodes in a graph (#16836)
add ece027
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new b63420c Bump the publis
haojin2 merged pull request #16801: add op bitwise_or [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16801
haojin2 merged pull request #16813: add numpy op diagflat [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16813
DickJC123 commented on issue #15882: Move Windows CI build to a 64-bit
toolchain to fix 'out of heap space'.
URL: https://github.com/apache/incubator-mxnet/pull/15882#issuecomment-557216520
@marcoabreu Sounds like if I resubmitted the core of this PR, you'd support
it. Anything specific b
sxjscience commented on issue #16878: add micro to pearsonr
URL: https://github.com/apache/incubator-mxnet/pull/16878#issuecomment-557215468
Nice add! Could you also add the test here?
https://github.com/apache/incubator-mxnet/blob/a8b31a239f5d5ed0ebff0f3be44b5e5534e0b3f5/tests/python/unitt
sxjscience commented on a change in pull request #16878: add micro to pearsonr
URL: https://github.com/apache/incubator-mxnet/pull/16878#discussion_r349245753
##
File path: python/mxnet/metric.py
##
@@ -1438,13 +1449,46 @@ class PearsonCorrelation(EvalMetric):
>>> pr =
DickJC123 commented on issue #16831: [CI] Python2: CPU - hangs after
test_create_np_param
URL:
https://github.com/apache/incubator-mxnet/issues/16831#issuecomment-557210749
I was under the impression that when a PR goes through CI, the code tested
is a merge of the PR with the then-curren
sxjscience commented on issue #16880: Better to flatten the label array in
metric.F1()
URL:
https://github.com/apache/incubator-mxnet/issues/16880#issuecomment-557207181
We will have label shape = (B, N_labels) in multi-label classification
problems, e.g., the PPI dataset used in Graph Ne
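The flattening being proposed makes a `(B,)` label vector and a `(B, N_labels)` multi-label matrix behave the same under a micro-averaged F1. A hedged numpy sketch of that behavior (illustrative only, not the metric.F1 implementation):

```python
import numpy as np

def micro_f1(labels, preds):
    """Micro-averaged F1 for binary {0, 1} arrays of any shape.

    Flattening first means (B,) and (B, N_labels) inputs are treated
    identically, which is the point under discussion.
    """
    labels = np.asarray(labels).ravel()
    preds = np.asarray(preds).ravel()
    tp = np.sum((preds == 1) & (labels == 1))
    fp = np.sum((preds == 1) & (labels == 0))
    fn = np.sum((preds == 0) & (labels == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Multi-label batch: labels of shape (B, N_labels) = (2, 3).
y = [[1, 0, 1], [0, 1, 0]]
p = [[1, 0, 0], [0, 1, 1]]
score = micro_f1(y, p)
```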
cjolivier01 commented on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-557204872
> XLA is effectively dead at this point so I'm not sure why we would want to
invest in that. MLIR is not really ready for prime time. Out of
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-557201298
Actually, i don't know what this issue is about. There's no actual report of
a problem in the description.
---
cjolivier01 removed a comment on issue #11417: libomp.so dependency (need REAL
fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-557196823
btw:
```
[chriso@chriso-dev:/opt/python3.6b]ldd
./lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_i
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-557196823
btw:
```
[chriso@chriso-dev:/opt/python3.6b]ldd
./lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.s
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-557196011
I always see this junk in there as well, but that doesn't necessarily mean
it'll link it:
```cmake
OpenMP_CXX_LIB_NAMES:STRING=gomp
ptrendx commented on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-557193983
XLA is effectively dead at this point so I'm not sure why we would want to
invest in that. MLIR is not really ready for prime time. Out of all of
marcoabreu commented on issue #15882: Move Windows CI build to a 64-bit
toolchain to fix 'out of heap space'.
URL: https://github.com/apache/incubator-mxnet/pull/15882#issuecomment-557171434
Happy to move forward with the upgrade to 64bit
---
jonatan1626 commented on issue #16845: MXNet 1.6.0 performance regression
URL:
https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-557149820
I have also uploaded the scripts to:
[Here](https://github.com/jonatan1626/mxnet-performance-benchmark/tree/master).
Do let me know
jonatan1626 edited a comment on issue #16845: MXNet 1.6.0 performance regression
URL:
https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-557144952
@pengzhao-intel The runs just finished; there was an error when running
resnet50_v1, so I have restarted the job and will post
jonatan1626 commented on issue #16845: MXNet 1.6.0 performance regression
URL:
https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-557144952
@pengzhao-intel The runs just finished; there was an error when running
resnet50_v1, so I have restarted the job and will post the res
Kh4L opened a new pull request #16881: Add TypeFlag=>string macro
URL: https://github.com/apache/incubator-mxnet/pull/16881
## Description ##
Add a macro mapping mshadow type_flag to strings, to improve debuggability.
## Checklist ##
### Essentials ###
Please feel free to re
zburning opened a new issue #16880: Better to flatten the label array in
metric.F1()
URL: https://github.com/apache/incubator-mxnet/issues/16880
## Description
Unlike the other metrics, the current metric.F1() doesn't flatten the label.
Commonly the label would have the shape of (ba
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new b898a0f Bump the publis
haojin2 commented on issue #16770: Flaky test: test_ops.test_convolution2d
URL:
https://github.com/apache/incubator-mxnet/issues/16770#issuecomment-557016446
@ptrendx @DickJC123 This is happening quite often for TensorRT tests; could
you take a look? I believe it could also be
haojin2 commented on issue #16770: Flaky test: test_ops.test_convolution2d
URL:
https://github.com/apache/incubator-mxnet/issues/16770#issuecomment-557016136
Happening again:
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-16801/13/pip
haojin2 commented on a change in pull request #16862: Op Unravel_index PR
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16862#discussion_r348997859
##
File path: python/mxnet/numpy/multiarray.py
##
@@ -57,7 +57,7 @@
'blackman', 'flip', 'around', '
liuzh91 commented on issue #16879: loss for training and evaluation in
estimator could be different
URL:
https://github.com/apache/incubator-mxnet/issues/16879#issuecomment-557005297
> How about introducing a new `evaluation_loss` or `evaluate_loss` argument
to the constructor. If it is N
haojin2 commented on a change in pull request #16865: [numpy]add op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865#discussion_r348985511
##
File path: src/operator/numpy/np_insert_op-inl.h
##
@@ -0,0 +1,638 @@
+/*
+ * Licensed to the Apache Software Founda
leezu commented on issue #16879: loss for training and evaluation in estimator
could be different
URL:
https://github.com/apache/incubator-mxnet/issues/16879#issuecomment-556997545
How about introducing a new `evaluation_loss` or `evaluate_loss` argument to
the constructor. If it is None,
haojin2 commented on a change in pull request #16774: [Numpy] op empty_like,
add nan_to_num to dispatch
URL: https://github.com/apache/incubator-mxnet/pull/16774#discussion_r348973212
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -39,7 +39,7 @@
'around'
haojin2 commented on issue #16830: CI error in unix gpu
test_quantization_gpu.test_quantized_conv
URL:
https://github.com/apache/incubator-mxnet/issues/16830#issuecomment-556989622
Happening again:
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cp
This is an automated email from the ASF dual-hosted git repository.
haoj pushed a commit to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/v1.6.x by this push:
new 530bd27 fix flakiness of test_np_mixed_p
haojin2 merged pull request #16873: Fix flakiness of
test_np_mixed_precision_binary_funcs
URL: https://github.com/apache/incubator-mxnet/pull/16873
liuzh91 opened a new issue #16879: loss for training and evaluation in
estimator could be different
URL: https://github.com/apache/incubator-mxnet/issues/16879
## Description
In current estimator implementation, fit_batch and evaluate_batch use the
same loss function.
Code snippet in
zburning opened a new pull request #16878: add micro to pearsonr
URL: https://github.com/apache/incubator-mxnet/pull/16878
## Description ##
Add micro averaging to the Pearson correlation coefficient.
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your
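In "micro" mode the per-class score columns are flattened into a single pair of vectors and one correlation coefficient is computed, instead of averaging per-column ("macro") coefficients. A numpy sketch of the micro variant; the function name is illustrative, not the metric.PearsonCorrelation API:

```python
import numpy as np

def pearson(x, y):
    """Plain Pearson correlation over flattened inputs."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))

# "micro": flatten (batch, num_class) labels and predictions and compute
# one coefficient; "macro" would average per-column coefficients instead.
labels = np.array([[1.0, 0.0], [0.0, 1.0]])
preds = np.array([[0.9, 0.1], [0.2, 0.8]])
micro = pearson(labels, preds)
```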
haojin2 commented on a change in pull request #16813: add numpy op diagflat
[numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16813#discussion_r348943326
##
File path: src/operator/numpy/np_matrix_op.cc
##
@@ -1325,5 +1326,27 @@ NNVM_REGISTER_OP(_backward_np_diag
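For reference, the NumPy semantics the diagflat operator follows: the input is flattened and placed on a diagonal of a fresh 2-D array, with `k` selecting which diagonal.

```python
import numpy as np

# diagflat flattens its input and writes it on the k-th diagonal
# of a new 2-D array (k=0 is the main diagonal).
a = np.diagflat([[1, 2], [3, 4]])   # 4x4 with diagonal 1, 2, 3, 4
b = np.diagflat([1, 2], k=1)        # 3x3 with 1, 2 above the diagonal
```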