This is an automated email from the ASF dual-hosted git repository.
zhreshold pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from c583e44 fix requantize flaky test (#16709)
add 0c5677e Faster GPU NMS operator (#16542)
No new revisions
zhreshold merged pull request #16542: Faster GPU NMS operator
URL: https://github.com/apache/incubator-mxnet/pull/16542
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
haojin2 commented on issue #16725: Failed test:
test_gluon_gpu.test_rnn_unroll_variant_length
URL:
https://github.com/apache/incubator-mxnet/issues/16725#issuecomment-549690366
@ptrendx @DickJC123 Could you provide some insight into this issue?
It seems related to the fused ops.
---
artor1os opened a new issue #16725: Failed test
URL: https://github.com/apache/incubator-mxnet/issues/16725
test name: test_gluon_gpu.test_rnn_unroll_variant_length
log:
```
test_gluon_gpu.test_rnn_unroll_variant_length ...
Segmentation fault: 11
Stack trace:
sxjscience edited a comment on issue #16705: Dropout inconsistency bug
URL:
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549683831
With the help of @xidulu , we have located the root cause of the issue:
The bug is triggered because we have multiple parallel GPU
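The class of bug described in the root-cause analysis above can be illustrated with a plain-NumPy stand-in (an illustration of the general shared-RNG-stream issue, not MXNet code): any operator that draws from a shared random generator, even when its output is discarded, advances the random state and makes later draws diverge.

```python
import numpy as np

# Illustration (NumPy stand-in, not MXNet code): a stray draw from a
# shared RNG -- e.g. a dropout op still sampling in inference mode --
# advances the random state, so subsequent draws diverge from a run
# that skipped the stray draw.
rng = np.random.default_rng(123)
_ = rng.random(4)            # stray draw whose output is discarded
after_stray = rng.random(3)

rng = np.random.default_rng(123)
clean = rng.random(3)        # same seed, no stray draw

print(np.allclose(after_stray, clean))  # prints False: streams diverged
```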
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 576bcbe Bump the publis
patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from b9f3b06 Updated logos. (#16719)
add c583e44 fix requantize flaky test (#16709)
No new revisions
pengzhao-intel merged pull request #16709: Fix requantize flaky test
URL: https://github.com/apache/incubator-mxnet/pull/16709
vasusingla619 commented on issue #16596: How to initialize a CPU tensor in
custom cu file?
URL:
https://github.com/apache/incubator-mxnet/issues/16596#issuecomment-549662413
Thanks, this was solved!
vasusingla619 closed issue #16596: How to initialize a CPU tensor in custom cu
file?
URL: https://github.com/apache/incubator-mxnet/issues/16596
stereomatchingkiss opened a new issue #16724: Example link of the image
classification show 404
URL: https://github.com/apache/incubator-mxnet/issues/16724
I keep getting a 404 error when I try to read the image classification
examples: https://mxnet.apache.org/tutorials/python/predict_image
reminisce commented on issue #16699: Mixed data type binary ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#issuecomment-549655791
@marcoabreu Appreciate your review. I can assure you that Windows is not
excluded from mixed-precision support any more than Unix is. @haojin2 has
ptrendx commented on issue #16723: [Bug] fused_op does not support boolean type
URL:
https://github.com/apache/incubator-mxnet/issues/16723#issuecomment-549639921
I see, this is a newly added type. We will fix this.
sxjscience opened a new issue #16723: Fuse_op does not support boolean type
URL: https://github.com/apache/incubator-mxnet/issues/16723
@ptrendx I find that the FusedOp does not support the boolean type. The
following script will trigger the error.
```python
import mxnet as mx
knjwhn commented on issue #16557: Where is the place that mxnet call cblas_gemm
if I use openblas?
URL:
https://github.com/apache/incubator-mxnet/issues/16557#issuecomment-549632939
> Hi @knjwhn,
>
>
https://github.com/apache/incubator-mxnet/blob/60d74bc948869588c2f143fd3d55231859d
xidulu commented on issue #16705: Dropout inconsistency bug
URL:
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549630402
Clearly, dropout in inference mode affects the random state:
```
>>> mx.random.seed(123)
>>> mx.nd.Dropout(x, cudnn_off=True)
[[1. 1
anirudh2290 commented on issue #16612: Compilation fails in master Cuda
10.1.105 GCC 7.4 Ubuntu 18.04
URL:
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549627214
I agree, it would be worth opening a PR to dmlc-core. Thanks @DickJC123 !
--
ChaiBapchya commented on issue #6493: Tutorials that need improvement
URL:
https://github.com/apache/incubator-mxnet/issues/6493#issuecomment-549626986
I'm guessing the more tutorials we have (on varied topics), the better it is
for our users. Personally, I'd be interested in knowing all of th
ChaiBapchya opened a new pull request #16722: Remove unused files in Website doc
URL: https://github.com/apache/incubator-mxnet/pull/16722
## Description ##
After the revamping of the MXNet website, we no longer need
```
python/mxnet/ndarray_doc.py
python/mxnet/symbol_doc.py
ChaiBapchya commented on issue #14243: Fix commands to make doc consistent
URL: https://github.com/apache/incubator-mxnet/pull/14243#issuecomment-549624745
Since the launch of the new website, we don't depend on this file anymore.
Hence, closing this as a stale PR.
ChaiBapchya closed pull request #14243: Fix commands to make doc consistent
URL: https://github.com/apache/incubator-mxnet/pull/14243
wuxun-zhang commented on issue #16184: Add large tensor nightly tests for
MKL-DNN operators
URL: https://github.com/apache/incubator-mxnet/pull/16184#issuecomment-549621011
@ChaiBapchya @marcoabreu Please take a look again and see if your concerns
are properly resolved. Thanks.
--
larroy edited a comment on issue #16612: Compilation fails in master Cuda
10.1.105 GCC 7.4 Ubuntu 18.04
URL:
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549614983
I think it would be user-friendly to avoid obscure compilation errors if
we can. Me
DickJC123 commented on issue #16612: Compilation fails in master Cuda 10.1.105
GCC 7.4 Ubuntu 18.04
URL:
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549614211
And FYI, if you feel it worth trying to correct this for MXNet users on the
original cuda 10.1, the fix to
aaronmarkham commented on issue #16408: Add MXNet Ops for fast multihead
attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#issuecomment-549613083
> @aaronmarkham is the website preview functionality still working after the
website upgrade? I cannot see the preview of this
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 80bc794 Bump the publis
DickJC123 commented on issue #16685: Memory planner doesn't respect 'output
independence'. More optimizations possible.
URL:
https://github.com/apache/incubator-mxnet/issues/16685#issuecomment-549608267
I have not begun to work on this, and my plate is fairly full, so someone
else can ju
DickJC123 removed a comment on issue #16131: Fix for duplicate subgraph
inputs/outputs
URL: https://github.com/apache/incubator-mxnet/pull/16131#issuecomment-549511847
I have not begun to work on this, and my plate is fairly full, so someone
else can jump in if they want. The issue can be
haojin2 commented on a change in pull request #16699: Mixed data type binary ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#discussion_r342324211
##
File path: src/operator/mshadow_op.h
##
@@ -194,6 +194,100 @@ MXNET_BINARY_MATH_OP_NC(right, b);
MXNET_BINA
ChaiBapchya commented on issue #11535: installed mxnet-cu92 on ubuntu but can't
run example code correctly
URL:
https://github.com/apache/incubator-mxnet/issues/11535#issuecomment-549593778
@zhuotest you can confirm
But @rohun-tripathi does this help - https://www.nvidia.com/drivers/bet
marcoabreu commented on a change in pull request #16699: Mixed data type binary
ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#discussion_r342319668
##
File path: src/operator/mshadow_op.h
##
@@ -194,6 +194,100 @@ MXNET_BINARY_MATH_OP_NC(right, b);
MXNET_
larroy commented on issue #16612: Compilation fails in master Cuda 10.1 GCC 7.4
Ubuntu 18.04
URL:
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549589878
I was able to upgrade and the problem went away with the updated CUDA.
-
haojin2 commented on a change in pull request #16699: Mixed data type binary ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#discussion_r342316290
##
File path: src/operator/mshadow_op.h
##
@@ -194,6 +194,100 @@ MXNET_BINARY_MATH_OP_NC(right, b);
MXNET_BINA
sxjscience commented on issue #16721: GPU is not enabled for mxnet based word
embeddings.
URL:
https://github.com/apache/incubator-mxnet/issues/16721#issuecomment-549585414
@csharma Could you ask questions at https://discuss.mxnet.io/ instead? The
issue page is for bug reports.
--
sxjscience closed issue #16721: GPU is not enabled for mxnet based word
embeddings.
URL: https://github.com/apache/incubator-mxnet/issues/16721
marcoabreu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 5a2fce5 [WIP][New Op] Add deformable conv v2 (#16341)
add b9f3b06 Updated logos. (#16719)
No new revisions
marcoabreu commented on a change in pull request #16699: Mixed data type binary
ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#discussion_r342309638
##
File path: src/operator/mshadow_op.h
##
@@ -194,6 +194,100 @@ MXNET_BINARY_MATH_OP_NC(right, b);
MXNET_
marcoabreu merged pull request #16719: Updated landing page logos (adding Dely)
URL: https://github.com/apache/incubator-mxnet/pull/16719
sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from bb6305d [MKLDNN] support mkldnn gelu (#16710)
add 5a2fce5 [WIP][New Op] Add deformable conv v2 (#16341)
sxjscience merged pull request #16341: [WIP][New Op] Add deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341
sxjscience commented on a change in pull request #16341: [WIP][New Op] Add
deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341#discussion_r342303707
##
File path: tests/python/unittest/test_contrib_operator.py
##
@@ -409,6 +409,42 @@ def test_op_mr
sxjscience commented on a change in pull request #16341: [WIP][New Op] Add
deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341#discussion_r342303092
##
File path: tests/python/unittest/test_contrib_operator.py
##
@@ -409,6 +409,42 @@ def test_op_mr
sxjscience commented on a change in pull request #16341: [WIP][New Op] Add
deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341#discussion_r342302212
##
File path: src/operator/contrib/nn/modulated_deformable_im2col.cuh
##
@@ -0,0 +1,541 @@
+/*
+ *
sxjscience commented on a change in pull request #16341: [WIP][New Op] Add
deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341#discussion_r342301961
##
File path: src/operator/contrib/nn/modulated_deformable_im2col.cuh
##
@@ -0,0 +1,541 @@
+/*
+ *
zhreshold commented on issue #16341: [WIP][New Op] Add deformable conv v2
URL: https://github.com/apache/incubator-mxnet/pull/16341#issuecomment-549568041
CI passed and training convergence passed. Could you help merge it, since
future GluonCV models depend on this PR? @eric-haibin-lin
DickJC123 commented on issue #16612: Compilation fails in master Cuda 10.1 GCC
7.4 Ubuntu 18.04
URL:
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549554260
Yes, I believe this is a problem present in the original cuda 10.1 release
(10.1.105), fixed by 10.1 Update 1
csharma commented on issue #16721: GPU is not enabled for mxnet based word
embeddings.
URL:
https://github.com/apache/incubator-mxnet/issues/16721#issuecomment-549549560
Add the following line to make it run:
from bert_embedding import BertEmbedding
Best,
Cartik
--
Jerryzcn commented on issue #16708: Training an FPN model using grad_req="add"
causes rapid divergence, while manually implemented gradient accumulation works
fine
URL:
https://github.com/apache/incubator-mxnet/issues/16708#issuecomment-549547579
There are also some bugs in grad accumulat
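One plausible way the two strategies reported in this issue can disagree is sketched below with a plain-NumPy stand-in (hypothetical, not the FPN training code): under `grad_req="add"` semantics the gradient buffer keeps summing until explicitly rescaled and zeroed, so a missing rescale makes the effective step grow with the accumulation window, while manual accumulation averages per window.

```python
import numpy as np

# Sketch (NumPy stand-in under assumed semantics, not MXNet code).
def manual_accumulation(grads, lr):
    # average over the window, then take one update step
    return -lr * np.mean(grads, axis=0)

def add_req_missing_rescale(grads, lr):
    # emulate grad_req="add": the buffer sums across the window; if the
    # update forgets to divide by the window size, the step is too large
    buf = np.zeros_like(grads[0])
    for g in grads:
        buf += g
    return -lr * buf

grads = [np.ones(2)] * 4
print(manual_accumulation(grads, 0.1))      # [-0.1 -0.1]
print(add_req_missing_rescale(grads, 0.1))  # [-0.4 -0.4]: 4x the intended step
```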
sxjscience commented on issue #16701: Hybridize, conditional operator, and loop
gradient/trainer bug
URL:
https://github.com/apache/incubator-mxnet/issues/16701#issuecomment-549542897
@junrushao1994 @szha @zheng-da
zheng-da commented on issue #16603: Significant slowdown in some DGL models
URL:
https://github.com/apache/incubator-mxnet/issues/16603#issuecomment-549542817
I just tried the experiment again and there is no problem. The command to
run the experiment:
```
python3 train.py --model Di
sxjscience commented on issue #16721: GPU is not enabled for mxnet based word
embeddings.
URL:
https://github.com/apache/incubator-mxnet/issues/16721#issuecomment-549541781
@csharma There is no `bert_embedding` here in MXNet.
---
csharma commented on issue #16721: GPU is not enabled for mxnet based word
embeddings.
URL:
https://github.com/apache/incubator-mxnet/issues/16721#issuecomment-549541198
Add this line:
from bert_embedding import BertEmbedding
Yes, the laptop I am using has GPU support with 2
sxjscience commented on issue #16721: GPU is not enabled for mxnet based word
embeddings.
URL:
https://github.com/apache/incubator-mxnet/issues/16721#issuecomment-549538072
@csharma Does the machine you are using have GPU support? Also, the Python
code you provided is not runnable.
--
sxjscience commented on issue #16705: Dropout inconsistency bug
URL:
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549534705
@DickJC123 You may see that I've manually set `cudnn_off=True`. Also, I
think https://github.com/apache/incubator-mxnet/pull/16532 will solve t
sxjscience commented on issue #16705: Dropout inconsistency bug
URL:
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549533626
@DickJC123 The answer should be different because these two dropouts should
share the same internal random number generator and the random stat
csharma opened a new issue #16721: GPU is not enabled for mxnet based word
embeddings.
URL: https://github.com/apache/incubator-mxnet/issues/16721
Hi,
bert_embedding = BertEmbedding(mx.gpu(0))
causes the following error.
Exception has occurred: MXNetError
[15:02:22] C:\J
DickJC123 closed issue #16670: cuDNN RNN dtype_with_fallback_ calc needs update
URL: https://github.com/apache/incubator-mxnet/issues/16670
DickJC123 commented on issue #16705: Dropout inconsistency bug
URL:
https://github.com/apache/incubator-mxnet/issues/16705#issuecomment-549520723
What behavior do we expect from a model that has two Dropouts, where no
seeds have been set explicitly in advance? Are the dropout patterns ide
stu1130 commented on issue #16670: cuDNN RNN dtype_with_fallback_ calc needs
update
URL:
https://github.com/apache/incubator-mxnet/issues/16670#issuecomment-549518502
@DickJC123 do you think we can close the issue?
My thought is that there are two things in the issue; the first one was addre
nickguletskii commented on issue #16718: Cleaner API for utilizing all GPUs if
available
URL:
https://github.com/apache/incubator-mxnet/issues/16718#issuecomment-549517741
I think it would be better to introduce a separate function called
`mxnet.all_gpus(): List[mxnet.Context]`, instead o
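The proposed helper could look roughly like the sketch below. This is an assumption-laden sketch of the suggestion, not an existing MXNet API: `num_gpus`, `gpu`, and `cpu` are passed in as stand-ins for `mxnet.context.num_gpus`, `mx.gpu`, and `mx.cpu`, so the sketch runs without MXNet installed.

```python
from typing import Callable, List

def all_gpus(num_gpus: Callable[[], int], gpu, cpu) -> List:
    """Sketch of the proposed mxnet.all_gpus() helper: one context per
    visible GPU, falling back to [cpu()] when none are available.
    num_gpus/gpu/cpu stand in for mxnet.context.num_gpus, mx.gpu, mx.cpu."""
    n = num_gpus()
    return [gpu(i) for i in range(n)] if n > 0 else [cpu()]

# Stand-in demo; with MXNet this would read:
#   all_gpus(mx.context.num_gpus, mx.gpu, mx.cpu)
print(all_gpus(lambda: 2, lambda i: f"gpu({i})", lambda: "cpu()"))
# prints ['gpu(0)', 'gpu(1)']
print(all_gpus(lambda: 0, lambda i: f"gpu({i})", lambda: "cpu()"))
# prints ['cpu()']
```

A CPU fallback (rather than an empty list) is one design choice; the API discussion above may well prefer returning an empty list so callers can detect the no-GPU case explicitly.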
ddavydenko commented on issue #1: Disable python logging verbose from C++
implementation
URL:
https://github.com/apache/incubator-mxnet/issues/1#issuecomment-549515989
@mxnet-label-bot add [Feature request]
ddavydenko commented on issue #1: Disable python logging verbose from C++
implementation
URL:
https://github.com/apache/incubator-mxnet/issues/1#issuecomment-549515804
@deHsien, this would be a feature request, as this is currently not supported.
@mxnet-label-bot add ["Feature Requ
ddavydenko commented on issue #16677: What mode does PRELU support?
URL:
https://github.com/apache/incubator-mxnet/issues/16677#issuecomment-549515021
@mxnet-label-bot add [Question]
DickJC123 commented on issue #16131: Fix for duplicate subgraph inputs/outputs
URL: https://github.com/apache/incubator-mxnet/pull/16131#issuecomment-549511847
I have not begun to work on this, and my plate is fairly full, so someone
else can jump in if they want. The issue can be fixed na
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 264081a Bump the publis
aaronmarkham commented on issue #16719: Updated landing page logos (adding Dely)
URL: https://github.com/apache/incubator-mxnet/pull/16719#issuecomment-549480769
Flaky test failure... reported and restarted the test.
aaronmarkham commented on issue #16238: [Flaky]
test_convolution_multiple_streams
URL:
https://github.com/apache/incubator-mxnet/issues/16238#issuecomment-549480414
Failed here:
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-16719/1/
reminisce commented on a change in pull request #16716: [Numpy][WIP] Fix
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342187118
##
File path: python/mxnet/gluon/parameter.py
##
@@ -904,7 +904,11
sxjscience commented on a change in pull request #16716: [Numpy][WIP] Fix
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342182321
##
File path: python/mxnet/gluon/parameter.py
##
@@ -904,7 +904,1
marcoabreu commented on a change in pull request #16477: added more tests to
verify support for large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#discussion_r342182341
##
File path: tests/nightly/test_large_vector.py
##
@@ -708,6 +708,174 @@ def test_f
sxjscience commented on a change in pull request #16716: [Numpy][WIP] Fix
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342181564
##
File path: python/mxnet/gluon/parameter.py
##
@@ -904,7 +904,1
reminisce commented on a change in pull request #16716: [Numpy][WIP] Fix
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342179463
##
File path: python/mxnet/gluon/parameter.py
##
@@ -904,7 +904,11
reminisce commented on a change in pull request #16716: [Numpy][WIP] Fix
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342178728
##
File path: python/mxnet/gluon/parameter.py
##
@@ -904,7 +904,11
reminisce commented on a change in pull request #16638: [WIP] [Numpy] Add
sampling method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r342175246
##
File path: python/mxnet/symbol/numpy_extension/random.py
##
@@ -0,0 +1,57 @@
+# Licens
anirudh2290 commented on issue #16612: Compilation fails in master Cuda 10.1
GCC 7.4 Ubuntu 18.04
URL:
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549426440
@hubutui Looks like your issue is unrelated. I don't see an issue related to
ThreadLocalStore in your log.
--
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 1019614 Bump the publis
hubutui commented on issue #16612: Compilation fails in master Cuda 10.1 GCC
7.4 Ubuntu 18.04
URL:
https://github.com/apache/incubator-mxnet/issues/16612#issuecomment-549326491
I got a similar issue on Arch Linux with CUDA 10.1.243, GCC 8.3.0, and
OpenCV 4.1.2. Here is my build log.
[mxn
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986855
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -3386,6 +3386,95 @@ def argmin(a, axis=No
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986579
##
File path: python/mxnet/numpy/multiarray.py
##
@@ -5320,6 +5320,92 @@ def argmin(a, axis=Non
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986777
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -3386,6 +3386,95 @@ def argmin(a, axis=No
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986704
##
File path: python/mxnet/numpy/multiarray.py
##
@@ -5320,6 +5320,92 @@ def argmin(a, axis=Non
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986397
##
File path: python/mxnet/symbol/numpy/_symbol.py
##
@@ -3355,6 +3355,94 @@ def argmin(a, axis
artor1os commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341986171
##
File path: python/mxnet/symbol/numpy/_symbol.py
##
@@ -3355,6 +3355,94 @@ def argmin(a, axis
patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 94aab39 [Quantization] Enhance gluon quantization API (#16695)
add bb6305d [MKLDNN] support mkldnn gelu (#16710)
pengzhao-intel merged pull request #16710: [MKLDNN] support mkldnn gelu
URL: https://github.com/apache/incubator-mxnet/pull/16710
fumingxing2015 closed issue #16562: Same model but different time-consuming
URL: https://github.com/apache/incubator-mxnet/issues/16562
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341962676
##
File path: src/operator/numpy/np_broadcast_reduce_op.h
##
@@ -398,6 +399,353 @@ void ReduceAx
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341962909
##
File path: src/operator/numpy/np_broadcast_reduce_op.h
##
@@ -398,6 +399,353 @@ void ReduceAx
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341961812
##
File path: src/operator/numpy/np_broadcast_reduce_op.h
##
@@ -398,6 +399,353 @@ void ReduceAx
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341961427
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -597,6 +597,98 @@ def _test_np_except
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341961190
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -597,6 +597,98 @@ def _test_np_except
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341960828
##
File path: src/operator/numpy/np_broadcast_reduce_op_value.cc
##
@@ -249,6 +250,76 @@ inline
haojin2 commented on a change in pull request #16720: [Numpy] Implement numpy
operator 'average'
URL: https://github.com/apache/incubator-mxnet/pull/16720#discussion_r341960103
##
File path: src/operator/numpy/np_broadcast_reduce_op.h
##
@@ -398,6 +399,353 @@ void ReduceAx