leezu commented on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-555882743
@atiqsayyed this may be feasible for MXNet 1.6 release if you are willing to
work on it. You can comment in #16438 and ping some yzhliu, nswamy or
leezu edited a comment on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-555882743
@atiqsayyed this may be feasible for MXNet 1.6 release if you are willing to
work on it. You can comment in #16438 and ping yzhliu, nswamy
atiqsayyed commented on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-555878222
For Scala 2.12 release
- https://github.com/apache/incubator-mxnet/issues/16438
- with the growing Scala community, the current version
pengzhao-intel edited a comment on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-555852930
For MKLDNN backend
- MKLDNN as the default CPU binary distribution
pengzhao-intel edited a comment on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-555852930
For MKLDNN backend
- Propose MKLDNN as the default CPU binary
This is an automated email from the ASF dual-hosted git repository.
ptrendx pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from f1c6880 Fix a few np issues (#16849)
add 61c8baf Add unoptimized symbol to executor for sharing
ptrendx merged pull request #16798: Add unoptimized symbol to executor for
sharing
URL: https://github.com/apache/incubator-mxnet/pull/16798
This is an automated message from the Apache Git Service.
To respond to the
ptrendx closed issue #16785: keras-mxnet training failed with FusedOP
URL: https://github.com/apache/incubator-mxnet/issues/16785
pango99 opened a new issue #16867: Question about using the handle returned by
MXPredReshape
URL: https://github.com/apache/incubator-mxnet/issues/16867
Hi, I'm using the MXNet C API and an ArcFace model to do face feature encoding. I found that the API MXPredReshape() can change the input shape of
vexilligera commented on a change in pull request #16800: [Numpy] Add NumPy
support for np.linalg.det and np.linalg.slogdet
URL: https://github.com/apache/incubator-mxnet/pull/16800#discussion_r347139368
##
File path: src/operator/tensor/la_op-inl.h
##
@@ -921,6 +927,9 @@
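For reference, the NumPy behavior this PR targets can be sketched as follows; this is plain NumPy, not the PR's MXNet implementation:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
# det computes the determinant directly
d = np.linalg.det(a)
# slogdet returns (sign, log|det|), which is numerically more stable
# for matrices whose determinant would over- or underflow
sign, logabsdet = np.linalg.slogdet(a)
assert np.isclose(d, sign * np.exp(logabsdet))
```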
xinyu-intel opened a new pull request #16866: [DO NOT MERGE] Test
quantized_conv flaky case
URL: https://github.com/apache/incubator-mxnet/pull/16866
## Description ##
#16830
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
-
leezu commented on issue #16863: PearsonCorrelation doesn't support micro
URL:
https://github.com/apache/incubator-mxnet/issues/16863#issuecomment-555863480
Thanks @zburning. I think this is a grave problem, and the current lack of 'micro' support should be considered a bug.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new a4d5b3f Bump the
sxjscience commented on issue #16845: MXNet 1.6.0 performance regression
URL:
https://github.com/apache/incubator-mxnet/issues/16845#issuecomment-555863138
Is the performance worse if we turned on hybridization?
pengzhao-intel commented on issue #16864: [Discussion] 1.7.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/16864#issuecomment-555852930
For MKLDNN backend
- Propose MKLDNN as the default CPU binary
JiangZhaoh opened a new pull request #16865: [numpy]add op insert
URL: https://github.com/apache/incubator-mxnet/pull/16865
## Description ##
add op: insert
ONLY forward
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ]
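The NumPy semantics this new operator mirrors (forward only, per the description) can be checked in plain NumPy; this is not the MXNet implementation:

```python
import numpy as np

a = np.array([1, 2, 3])
# insert a single value before index 1
assert np.insert(a, 1, 99).tolist() == [1, 99, 2, 3]

# insert along an axis of a 2-D array: a column of zeros before column 1
b = np.arange(6).reshape(2, 3)
assert np.insert(b, 1, 0, axis=1).shape == (2, 4)
```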
zixuanweeei commented on issue #16604: RNN op with dropout cannot use fixed
seed on CPU
URL:
https://github.com/apache/incubator-mxnet/issues/16604#issuecomment-555851063
> On second thought, maybe I can fix RNN on CPU first; I will leave the GPU logic as the original implementation here:
pengzhao-intel opened a new issue #16864: [Discussion] 1.7.0 Roadmap
URL: https://github.com/apache/incubator-mxnet/issues/16864
Let's start a discussion here about the roadmap towards 1.7.0. We are
looking for:
New features that are useful to your research and development.
zeakey commented on issue #7375: Can I set instance weight when training?
URL:
https://github.com/apache/incubator-mxnet/issues/7375#issuecomment-555849802
I face the same problem.
I think the situation @regzhuce mentioned can be abstracted as: manually
assigning weights to the loss of
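The per-instance weighting described above can be sketched in plain NumPy; the arrays are made-up numbers, not an MXNet API:

```python
import numpy as np

# per-sample losses from some model (made-up numbers)
loss = np.array([0.5, 1.0, 2.0])
# manually assigned per-instance weights
weights = np.array([1.0, 0.5, 2.0])
# weighting happens before the reduction to a scalar loss
weighted_loss = float((loss * weights).sum() / weights.sum())
```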
zburning opened a new issue #16863: PearsonCorrelation doesn't support micro
URL: https://github.com/apache/incubator-mxnet/issues/16863
## Description
Currently, mxnet.metric.PearsonCorrelation() does not support 'micro'
averaging, which is necessary during evaluation.
It's quite straight
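A 'micro'-style (globally accumulated) Pearson correlation can be sketched with running sums; this is a hypothetical illustration of what such support might compute, not the mxnet.metric code:

```python
import math

class StreamingPearson:
    """Accumulate sufficient statistics across batches ('micro' style)."""
    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def update(self, xs, ys):
        for x, y in zip(xs, ys):
            self.n += 1
            self.sx += x
            self.sy += y
            self.sxx += x * x
            self.syy += y * y
            self.sxy += x * y

    def get(self):
        num = self.n * self.sxy - self.sx * self.sy
        den = math.sqrt((self.n * self.sxx - self.sx ** 2)
                        * (self.n * self.syy - self.sy ** 2))
        return num / den

m = StreamingPearson()
m.update([1.0, 2.0], [2.0, 4.0])   # first batch
m.update([3.0], [6.0])             # second batch
assert abs(m.get() - 1.0) < 1e-12  # y = 2x, perfectly correlated
```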
pengzhao-intel closed issue #13221: Flaky tests:
test_gluon_model_zoo_gpu.test_training
URL: https://github.com/apache/incubator-mxnet/issues/13221
pengzhao-intel closed issue #16113: [Flaky] test_mkldnn.test_activation
URL: https://github.com/apache/incubator-mxnet/issues/16113
pengzhao-intel commented on issue #16113: [Flaky] test_mkldnn.test_activation
URL:
https://github.com/apache/incubator-mxnet/issues/16113#issuecomment-555841082
Closing since we haven't seen the issue for a long time.
Feel free to reopen.
pengzhao-intel closed issue #15032: FC with bias ndim > 1 fails with MKLDNN
URL: https://github.com/apache/incubator-mxnet/issues/15032
pengzhao-intel commented on issue #14979: [BUG] Using a package with MKL and
GPU versions, using python to open a new process will cause an error
URL:
https://github.com/apache/incubator-mxnet/issues/14979#issuecomment-555840596
Closing since we removed the MKL dependency.
pengzhao-intel commented on issue #15032: FC with bias ndim > 1 fails with
MKLDNN
URL:
https://github.com/apache/incubator-mxnet/issues/15032#issuecomment-555840467
fixed and closing
pengzhao-intel closed issue #14979: [BUG] Using a package with MKL and GPU
versions, using python to open a new process will cause an error
URL: https://github.com/apache/incubator-mxnet/issues/14979
pengzhao-intel commented on issue #15294: mkldnn is not properly installed
URL:
https://github.com/apache/incubator-mxnet/issues/15294#issuecomment-555839888
https://github.com/apache/incubator-mxnet/pull/16731
pengzhao-intel closed issue #15294: mkldnn is not properly installed
URL: https://github.com/apache/incubator-mxnet/issues/15294
haozeze closed issue #16853: wrong when Build MXNet with cmake
URL: https://github.com/apache/incubator-mxnet/issues/16853
reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 55fe7c5 Fix IndentationError in setup.py (#16857)
add f1c6880 Fix a few np issues (#16849)
No
reminisce merged pull request #16849: Fix a few np issues
URL: https://github.com/apache/incubator-mxnet/pull/16849
reminisce closed issue #16616: [Flaky] test_numpy_op.test_np_einsum
URL: https://github.com/apache/incubator-mxnet/issues/16616
Tommliu opened a new pull request #16862: Op Unravel_index PR [Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16862
## Description ##
Unravel_index pr
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR
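For reference, the NumPy semantics of `unravel_index`, which this PR presumably mirrors (plain NumPy shown, not the PR code):

```python
import numpy as np

# flat index 22 in a (7, 6) array sits at row 3, column 4 (22 == 3*6 + 4)
assert np.unravel_index(22, (7, 6)) == (3, 4)

# vectorized form: several flat indices at once
rows, cols = np.unravel_index([22, 41], (7, 6))
assert rows.tolist() == [3, 6] and cols.tolist() == [4, 5]
```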
gray2bgr commented on issue #16210: Load pre-trained AlexNet ONNX official model
URL:
https://github.com/apache/incubator-mxnet/issues/16210#issuecomment-555813621
I got the same error "KeyError: 'concat1'". How did you solve it?
zachgk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 9889310 Initial checkin (#16856)
add 55fe7c5 Fix IndentationError in setup.py (#16857)
No new
zachgk merged pull request #16857: Fix IndentationError in setup.py
URL: https://github.com/apache/incubator-mxnet/pull/16857
haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 60f53ed [fix] missing input log higher order. (#15331)
add 9889310 Initial checkin (#16856)
No new
haojin2 merged pull request #16856: Fix zero-size problem in expand_dims MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/16856
haojin2 closed issue #16850: [Numpy] expand_dims throws delay_alloc error
URL: https://github.com/apache/incubator-mxnet/issues/16850
haojin2 closed pull request #14644: Speedup concat op
URL: https://github.com/apache/incubator-mxnet/pull/14644
haojin2 closed pull request #15984: [DO NOT REVIEW] [DO NOT MERGE] General
reduce compute for tvm ops and TVM version of sum
URL: https://github.com/apache/incubator-mxnet/pull/15984
apeforest commented on issue #16823: [WIP] Upgrade MKL-DNN dependency to v1.1
URL: https://github.com/apache/incubator-mxnet/pull/16823#issuecomment-555795474
I will try it tonight. Thanks!
TEChopra1000 commented on issue #16724: Example link of the image
classification show 404
URL:
https://github.com/apache/incubator-mxnet/issues/16724#issuecomment-555795539
@stereomatchingkiss would you please point me to the page you were on when
you found the broken link?
haojin2 opened a new pull request #16861: Support NumPy-compatible bitwise_and
URL: https://github.com/apache/incubator-mxnet/pull/16861
## Description ##
As title.
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR
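The NumPy-compatible semantics in question, illustrated with plain NumPy (not the MXNet implementation):

```python
import numpy as np

# 12 = 0b1100, 10 = 0b1010, so 12 & 10 == 0b1000 == 8
assert int(np.bitwise_and(12, 10)) == 8

# elementwise, with a scalar broadcast against an array
a = np.array([0b1100, 0b1010], dtype=np.int32)
assert np.bitwise_and(a, 0b1010).tolist() == [8, 10]
```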
xinyu-intel edited a comment on issue #11417: libomp.so dependency (need REAL
fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-555791907
@cjolivier01 Typo: I actually used tao's command line in this issue:
```
cmake3 .. -DUSE_CUDA=0 -DUSE_LAPACK=0
xinyu-intel commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-555791907
@cjolivier01 Typo: I actually used yao's command line in this issue:
```
cmake3 .. -DUSE_CUDA=0 -DUSE_LAPACK=0
xinyu-intel commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-555791537
(base) [chenxiny@mlt2-clx093 ~]$ cmake --version
cmake version 2.8.12.2
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-555790585
> cmake2, i'll check cmake3 later:)
What's the actual version? I'll build on my machine
xinyu-intel commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-555790223
cmake2; I'll check cmake3 later :)
ZhennanQin opened a new pull request #16860: [MKLDNN] enable MaxPooling with
full pooling convention
URL: https://github.com/apache/incubator-mxnet/pull/16860
## Description ##
It seems MKLDNN supports MaxPooling with full pooling convention.
Mobilenet_v3 uses this and the accuracy
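The difference between the two pooling conventions is in how the output size is rounded. A sketch assuming the usual floor ('valid') vs. ceil ('full') rounding definitions, which is my reading rather than the PR's code:

```python
import math

def pool_out_size(n, kernel, stride, pad, convention="valid"):
    """Output length of a 1-D pooling window under each convention."""
    span = n + 2 * pad - kernel
    if convention == "valid":
        return span // stride + 1          # floor: drop the partial window
    return math.ceil(span / stride) + 1    # "full": keep the partial window

# input 8, kernel 3, stride 2: 'valid' drops the trailing partial window
assert pool_out_size(8, 3, 2, 0, "valid") == 3
assert pool_out_size(8, 3, 2, 0, "full") == 4
```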
haojin2 opened a new pull request #16859: Mixed-type mx.np.power
URL: https://github.com/apache/incubator-mxnet/pull/16859
## Description ##
Support `np.power` for inputs with different data types.
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable
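The mixed-dtype behavior being added, illustrated with plain NumPy type promotion (not the MXNet code):

```python
import numpy as np

# NumPy promotes mixed input dtypes: int32 ** float64 -> float64
a = np.array([1, 2, 3], dtype=np.int32)
b = np.array([0.5, 1.0, 2.0], dtype=np.float64)
c = np.power(a, b)
assert c.dtype == np.float64
assert np.allclose(c, [1.0, 2.0, 9.0])
```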
eric-haibin-lin opened a new issue #16858: Cannot load trainer with AMP
URL: https://github.com/apache/incubator-mxnet/issues/16858
test.py:
```
import mxnet as mx
import os
import logging
from fp16_utils import LAMB2
net = mx.gluon.nn.Dense(10, in_units=10)
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new c745386 Bump the
zachgk opened a new pull request #16857: Fix IndentationError in setup.py
URL: https://github.com/apache/incubator-mxnet/pull/16857
## Description ##
Fix the setup.py file indentation error in CD
reminisce commented on issue #16856: Fix zero-size problem in expand_dims MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/16856#issuecomment-555766396
@stu1130
reminisce opened a new pull request #16856: Fix zero-size problem in
expand_dims MKLDNN
URL: https://github.com/apache/incubator-mxnet/pull/16856
Fixed https://github.com/apache/incubator-mxnet/issues/16850. Added test.
This
junrushao1994 commented on issue #16836: Fix InferAttr/InferShapeAttr not
calling inference for all nodes in a graph
URL: https://github.com/apache/incubator-mxnet/pull/16836#issuecomment-555761475
@ptrendx At the time I was writing this piece of code, operators did
return float32,
junrushao1994 commented on issue #16836: Fix InferAttr/InferShapeAttr not
calling inference for all nodes in a graph
URL: https://github.com/apache/incubator-mxnet/pull/16836#issuecomment-555761648
@ptrendx I think leaving it -1 is a nice proposal. Could you fix this?
Thanks!
ptrendx commented on issue #16798: Add unoptimized symbol to executor for
sharing
URL: https://github.com/apache/incubator-mxnet/pull/16798#issuecomment-555754269
@roywei With the latest version of this PR your script passes for me locally
- could you validate?
ptrendx commented on issue #16559: Tracking mxnet.numpy operator issues for
1.6.0 release
URL:
https://github.com/apache/incubator-mxnet/issues/16559#issuecomment-555748559
Ok, if those issues are fixed could you close this then?
ptrendx commented on issue #16836: Fix InferAttr/InferShapeAttr not calling
inference for all nodes in a graph
URL: https://github.com/apache/incubator-mxnet/pull/16836#issuecomment-555746778
Not sure I understand @junrushao1994 - I believe the problem here is that
you are forcing the
junrushao1994 edited a comment on issue #16836: Fix InferAttr/InferShapeAttr
not calling inference for all nodes in a graph
URL: https://github.com/apache/incubator-mxnet/pull/16836#issuecomment-555736904
At the time I wrote the control flow code, it was still infeasible to have
int64
junrushao1994 commented on issue #16836: Fix InferAttr/InferShapeAttr not
calling inference for all nodes in a graph
URL: https://github.com/apache/incubator-mxnet/pull/16836#issuecomment-555736904
At the time I wrote the control flow code, it was still infeasible to have
int64 output for
haojin2 commented on issue #16801: add op bitwise_or [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16801#issuecomment-555727055
Please address the comments and get rid of the change of tvm submodule.
haojin2 commented on a change in pull request #16801: add op bitwise_or [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16801#discussion_r348181196
##
File path: src/operator/mshadow_op.h
##
@@ -1327,6 +1329,7 @@ struct lcm : public mxnet_op::tunable {
}
};
cjolivier01 edited a comment on issue #11417: libomp.so dependency (need REAL
fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-555708539
> @cjolivier01 Hi, I also encounter this issue:(
>
> ```
> [chenxiny@mlt2-clx093 build]$ ldd libmxnet.so | grep
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-555708539
> @cjolivier01 Hi, I also encounter this issue:(
>
> ```
> [chenxiny@mlt2-clx093 build]$ ldd libmxnet.so | grep omp
>
reminisce commented on issue #16559: Tracking mxnet.numpy operator issues for
1.6.0 release
URL:
https://github.com/apache/incubator-mxnet/issues/16559#issuecomment-555704936
@ptrendx All issues should have been fixed by @haojin2. Thanks.
haojin2 commented on a change in pull request #16813: add numpy op diagflat
[numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16813#discussion_r348150163
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -4140,6 +4140,47 @@ def dbg(name, data):
haojin2 commented on a change in pull request #16813: add numpy op diagflat
[numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16813#discussion_r348150088
##
File path: tests/python/unittest/test_numpy_interoperability.py
##
@@ -1201,6 +1201,35 @@ def
apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 4f14bf4 Add large tensor nightly tests for MKL-DNN operators (#16184)
add 60f53ed [fix] missing
apeforest merged pull request #15331: [fix] missing input log higher order.
URL: https://github.com/apache/incubator-mxnet/pull/15331
apeforest pushed a change to branch benchmark
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from e007dcd adding API doc for Lamb Phase 1 and 2
add 169ed69 Speed fused_op compilation by
ChaiBapchya commented on issue #16854: Change Reshape's shape size check to EQ
URL: https://github.com/apache/incubator-mxnet/pull/16854#issuecomment-555657540
Usability-wise it makes total sense to keep this compatible with NumPy,
hence +1 for this fix.
Git blame on the line points to #7698
ptrendx commented on issue #16798: Add unoptimized symbol to executor for
sharing
URL: https://github.com/apache/incubator-mxnet/pull/16798#issuecomment-555654967
Yup, there was a small issue: the fix worked when doing
`Bind`/`SimpleBind` (as those functions copied the symbol before
sxjscience commented on issue #16855: [Numpy] The argument parser of some
operators cannot parse numpy integers or mx.numpy integers correctly.
URL:
https://github.com/apache/incubator-mxnet/issues/16855#issuecomment-555652085
After discussion with @junrushao1994, we can try to convert
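The kind of conversion being discussed can be illustrated in plain NumPy; the `axis` name below is hypothetical, chosen only to show why NumPy integer scalars trip up a parser that expects Python ints:

```python
import numpy as np

# NumPy integer scalars are not Python ints, which is why an argument
# parser that demands a Python int can reject them
axis = np.int64(1)
assert isinstance(axis, np.integer)
assert not isinstance(axis, int)

# the usual workaround is an explicit conversion before passing it in
assert isinstance(int(axis), int)
```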
Kh4L commented on issue #16854: Change Reshape's shape size check to EQ
URL: https://github.com/apache/incubator-mxnet/pull/16854#issuecomment-555651416
Looking at the failing tests, it looks like it was intentional.
> Would be great if previous committers who pushed this code
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 497dba8 Bump the
sxjscience commented on issue #16855: [Numpy] The argument parser of some
operators cannot parse numpy integers or mx.numpy integers correctly.
URL:
https://github.com/apache/incubator-mxnet/issues/16855#issuecomment-555643736
I'll partially solve the problem for the split/hsplit/vsplit
sxjscience opened a new issue #16855: [Numpy] The argument parser of some
operators cannot parse numpy integers or mx.numpy integers correctly.
URL: https://github.com/apache/incubator-mxnet/issues/16855
```python
import mxnet as mx
import numpy as np
mx.npx.set_np()
a =
ptrendx commented on issue #16559: Tracking mxnet.numpy operator issues for
1.6.0 release
URL:
https://github.com/apache/incubator-mxnet/issues/16559#issuecomment-555633936
@reminisce Is there any update to the status of those issues?
ptrendx commented on issue #16704: Nightly tests fail with "Cannot find TVM op
config"
URL:
https://github.com/apache/incubator-mxnet/issues/16704#issuecomment-555633325
Fix merged to both master and 1.6, closing.
ptrendx closed issue #16704: Nightly tests fail with "Cannot find TVM op config"
URL: https://github.com/apache/incubator-mxnet/issues/16704
ptrendx commented on issue #16838: USE_NVRTC -> ENABLE_CUDA_RTC to fix maven
build. Add compile-guard to fusion.
URL: https://github.com/apache/incubator-mxnet/pull/16838#issuecomment-555622389
`[03:39:03] /work/mxnet/src/executor/attach_op_execs_pass.cc:355: Neither
FCompute nor
ptrendx pushed a commit to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/v1.6.x by this push:
new 6834d15 [Gluon] Improve estimator
ptrendx merged pull request #16846: Backport #16810
URL: https://github.com/apache/incubator-mxnet/pull/16846
Kh4L opened a new pull request #16854: Change Reshape's shape size check to EQ
URL: https://github.com/apache/incubator-mxnet/pull/16854
## Description ##
Change non-recorded `NDArray::Reshape` to accept only shapes with the same
volume, in order to match NumPy behavior.
NumPy
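The NumPy behavior being matched: reshape succeeds only when the total number of elements is unchanged. A plain-NumPy illustration, not the MXNet change itself:

```python
import numpy as np

a = np.arange(6)
# same number of elements: allowed
assert a.reshape(2, 3).shape == (2, 3)

# different number of elements: NumPy raises ValueError
try:
    a.reshape(2, 4)
    raised = False
except ValueError:
    raised = True
assert raised
```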
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-79226
oh ok 4.8.5
On Tue, Nov 19, 2019 at 8:09 AM Chris Olivier wrote:
> what version of gcc?
>
> On Tue, Nov 19,
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-78760
what version of gcc?
On Tue, Nov 19, 2019 at 1:29 AM Xinyu Chen wrote:
> @cjolivier01
roywei edited a comment on issue #16604: RNN op with dropout cannot use fixed
seed on CPU
URL:
https://github.com/apache/incubator-mxnet/issues/16604#issuecomment-61348
On second thought, maybe I can fix RNN on CPU first; I will leave the GPU logic
as the original implementation here:
roywei edited a comment on issue #16604: RNN op with dropout cannot use fixed
seed on CPU
URL:
https://github.com/apache/incubator-mxnet/issues/16604#issuecomment-61348
On second thought, maybe I can fix RNN on CPU first; I will leave the GPU logic
the same here:
roywei edited a comment on issue #16604: RNN op with dropout cannot use fixed
seed on CPU
URL:
https://github.com/apache/incubator-mxnet/issues/16604#issuecomment-61348
On second thought, maybe I can fix RNN on CPU first; I will leave the GPU logic
the same here.
roywei commented on issue #16604: RNN op with dropout cannot use fixed seed on
CPU
URL:
https://github.com/apache/incubator-mxnet/issues/16604#issuecomment-61348
On second thought, maybe I can fix RNN on CPU first; I will leave the GPU logic
the same
roywei commented on issue #16604: RNN op with dropout cannot use fixed seed on
CPU
URL:
https://github.com/apache/incubator-mxnet/issues/16604#issuecomment-57128
@zixuanweeei the problems are discussed in this PR:
https://github.com/apache/incubator-mxnet/pull/16532 and this issue:
roywei edited a comment on issue #16604: RNN op with dropout cannot use fixed
seed on CPU
URL:
https://github.com/apache/incubator-mxnet/issues/16604#issuecomment-57128
@zixuanweeei the problems are discussed in this PR:
https://github.com/apache/incubator-mxnet/pull/16532 and this