ciyongch commented on issue #17183: [MKL-DNN] MKL-DNN RNN backward path
enhancement
URL: https://github.com/apache/incubator-mxnet/pull/17183#issuecomment-570487519
@zixuanweeei Do you have any clue about the perf degradation on the forward
pass? As the PR also did some refactoring to the
This is an automated email from the ASF dual-hosted git repository.
reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 4f9890f [tvmop] link libtvmop with libtvm_runtime (#17203)
add 8e946c9 Implement atleast_1d/2d/3d
reminisce merged pull request #17099: [Numpy] Implement atleast_1d/2d/3d
URL: https://github.com/apache/incubator-mxnet/pull/17099
This is an automated message from the Apache Git Service.
To respond to the message, please
reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 89fe1f6 [MKLDNN] Support channel wise quantization for FullyConnected
(#17187)
add 4f9890f
reminisce merged pull request #17203: [tvmop] link libtvmop with libtvm_runtime
URL: https://github.com/apache/incubator-mxnet/pull/17203
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new b612807 Bump the
reminisce commented on a change in pull request #16898: Sparse int64 Large
tensor support
URL: https://github.com/apache/incubator-mxnet/pull/16898#discussion_r362704384
File path: include/mxnet/c_api.h
@@ -643,14 +643,14 @@ MXNET_DLL int MXNDArrayCreateSparseEx(int
TaoLv commented on issue #17183: [MKL-DNN][WIP] MKL-DNN RNN backward path
enhancement
URL: https://github.com/apache/incubator-mxnet/pull/17183#issuecomment-570459088
@zixuanweeei Please remove [WIP] from the title and retrigger CI. @ciyongch
please help to review. Thanks!
taolv pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 2a9ec0e Softmax primitive cache and in-place computation (#17152)
add 89fe1f6 [MKLDNN] Support
TaoLv merged pull request #17187: [MKLDNN] Support channel wise quantization
for FullyConnected
URL: https://github.com/apache/incubator-mxnet/pull/17187
TaoLv commented on issue #17187: [MKLDNN] Support channel wise quantization for
FullyConnected
URL: https://github.com/apache/incubator-mxnet/pull/17187#issuecomment-570458037
Thank you for the contribution. Merging now~
Rainweic commented on issue #17164: net.Cast("float16") doesn't work: Check
failed: (*in_type)[i] == dtype_param (2 vs. 0) : This layer requires uniform
type. Expected 'float32' v.s. given 'float16' at 'gamma'
URL:
https://github.com/apache/incubator-mxnet/issues/17164#issuecomment-570436634
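The error in the issue title comes from a uniform-dtype check: a layer requires all of its inputs and parameters to share one dtype, so a `gamma` parameter left at a different precision after `net.cast("float16")` trips it. A pure-Python sketch of that kind of check (illustrative only — the names and structure here are hypothetical, not MXNet's actual C++ implementation):

```python
def check_uniform_dtype(param_dtypes, expected):
    """Raise if any parameter's dtype differs from the expected one,
    mirroring the shape of the 'This layer requires uniform type' check."""
    for name, dtype in param_dtypes.items():
        if dtype != expected:
            raise TypeError(
                "This layer requires uniform type. "
                "Expected '%s' v.s. given '%s' at '%s'" % (expected, dtype, name)
            )

# A layer whose 'gamma' ended up float16 while the layer still
# expects float32 parameters:
params = {"beta": "float32", "gamma": "float16"}
try:
    check_uniform_dtype(params, "float32")
except TypeError as err:
    print(err)  # → ... Expected 'float32' v.s. given 'float16' at 'gamma'
```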
haibin pushed a commit to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/v1.6.x by this push:
new dafbb11 fix norm sparse fallback
eric-haibin-lin merged pull request #17202: Backport PR 17149 to 1.6.x
URL: https://github.com/apache/incubator-mxnet/pull/17202
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 6e725de Bump the
patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 77c7c3a Enhancements for custom subgraph op (#17194)
add 2a9ec0e Softmax primitive cache and
pengzhao-intel commented on issue #17138: Interleaved MHA for CPU path
URL: https://github.com/apache/incubator-mxnet/pull/17138#issuecomment-570422621
Is this PR good to merge? @TaoLv @eric-haibin-lin
pengzhao-intel merged pull request #17152: Softmax primitive cache and in-place
computation
URL: https://github.com/apache/incubator-mxnet/pull/17152
ciyongch commented on issue #17187: [MKLDNN] Support channel wise quantization
for FullyConnected
URL: https://github.com/apache/incubator-mxnet/pull/17187#issuecomment-570421812
@TaoLv @ZhennanQin @xinyu-intel please help to review the latest changes :)
pengzhao-intel commented on issue #17170: add mkldnn softmax backward
URL: https://github.com/apache/incubator-mxnet/pull/17170#issuecomment-570419924
@rongzha1 please retrigger the PR @TaoLv could you help review again?
onomatet opened a new issue #17207: LR schedulers do not work in R
URL: https://github.com/apache/incubator-mxnet/issues/17207
Neither custom nor pre-built learning rate schedulers have any effect on the
training process in R.
The training process is always the same regardless of the
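For reference, a working scheduler should change the effective learning rate as updates accumulate — if training behaves identically under every scheduler, the schedule is simply not being applied. A minimal pure-Python sketch modeled on the semantics of a factor-based scheduler (a hypothetical re-implementation for illustration, not MXNet's own class):

```python
class FactorScheduler:
    """Multiply base_lr by `factor` once per completed `step` updates."""
    def __init__(self, step, factor, base_lr=0.01):
        self.step, self.factor, self.base_lr = step, factor, base_lr

    def __call__(self, num_update):
        # The number of completed steps determines the decay exponent.
        return self.base_lr * self.factor ** (num_update // self.step)

sched = FactorScheduler(step=100, factor=0.5, base_lr=0.1)
print(sched(0))    # 0.1
print(sched(100))  # 0.05
print(sched(250))  # 0.025 -- a flat lr across updates means the
                   #          scheduler is not being consulted
```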
ChaiBapchya commented on a change in pull request #16898: Sparse int64 Large
tensor support
URL: https://github.com/apache/incubator-mxnet/pull/16898#discussion_r362671686
File path: python/mxnet/ndarray/sparse.py
@@ -88,19 +88,34 @@ def _new_alloc_handle(stype,
larroy commented on issue #17206: [DOC] Update documentation with automated
windows environment scripts
URL: https://github.com/apache/incubator-mxnet/pull/17206#issuecomment-570401835
@mxnet-label-bot add [pr-awaiting-review]
larroy opened a new pull request #17206: Update documentation with automated
windows environment scripts
URL: https://github.com/apache/incubator-mxnet/pull/17206
## Description ##
Update windows build from source documentation with automated scripts.
As part of fixing the
apeforest commented on a change in pull request #16898: Sparse int64 Large
tensor support
URL: https://github.com/apache/incubator-mxnet/pull/16898#discussion_r362670067
File path: python/mxnet/ndarray/sparse.py
@@ -88,19 +88,34 @@ def _new_alloc_handle(stype,
apeforest commented on a change in pull request #16898: Sparse int64 Large
tensor support
URL: https://github.com/apache/incubator-mxnet/pull/16898#discussion_r362670021
File path: python/mxnet/ndarray/sparse.py
@@ -88,19 +88,34 @@ def _new_alloc_handle(stype,
apeforest commented on issue #16898: Sparse int64 Large tensor support
URL: https://github.com/apache/incubator-mxnet/pull/16898#issuecomment-570400783
Adding @reminisce to review the C API change for numpy operator.
zachgk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from b38816d ONNX export: Gather (#15995)
add 77c7c3a Enhancements for custom subgraph op (#17194)
zachgk merged pull request #17194: Enhancements for custom subgraph op
URL: https://github.com/apache/incubator-mxnet/pull/17194
eric-haibin-lin closed pull request #17146: Fix sparse L2Norm's fallback
regression
URL: https://github.com/apache/incubator-mxnet/pull/17146
eric-haibin-lin commented on issue #17146: Fix sparse L2Norm's fallback
regression
URL: https://github.com/apache/incubator-mxnet/pull/17146#issuecomment-570388445
Opened another PR with this fix that passes CI:
https://github.com/apache/incubator-mxnet/pull/17146
eric-haibin-lin commented on issue #17202: Backport PR 17149 to 1.6.x
URL: https://github.com/apache/incubator-mxnet/pull/17202#issuecomment-570388600
@ptrendx could you rebase and merge this PR for 1.6.x
ptrendx pushed a commit to branch v1.6.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/v1.6.x by this push:
new c7d35db fix parameter names in the
ptrendx merged pull request #17162: Backport #17051 to 1.6
URL: https://github.com/apache/incubator-mxnet/pull/17162
ishaybas opened a new issue #17205: MXNetCAPI.so: undefined symbol:
MXListAllOpNames
URL: https://github.com/apache/incubator-mxnet/issues/17205
## Description
Trying to build MXNet for the first time (for me), and then building the
PERL binding.
All went fine with no errors, but
anirudhacharya commented on a change in pull request #14942: ONNX export: Slice
op - Handle None value for ends
URL: https://github.com/apache/incubator-mxnet/pull/14942#discussion_r362656297
File path: python/mxnet/contrib/onnx/onnx2mx/_op_translations.py
@@
vandanavk commented on a change in pull request #14942: ONNX export: Slice op -
Handle None value for ends
URL: https://github.com/apache/incubator-mxnet/pull/14942#discussion_r362656572
File path: python/mxnet/contrib/onnx/onnx2mx/_op_translations.py
@@ -499,15
anirudhacharya merged pull request #15995: ONNX export: Gather
URL: https://github.com/apache/incubator-mxnet/pull/15995
anirudhacharya pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from b9bce35 boolean indexing (#17009)
add b38816d ONNX export: Gather (#15995)
vandanavk commented on a change in pull request #14942: ONNX export: Slice op -
Handle None value for ends
URL: https://github.com/apache/incubator-mxnet/pull/14942#discussion_r362652961
File path: python/mxnet/contrib/onnx/mx2onnx/_op_translations.py
@@ -1499,17
ChaiBapchya commented on a change in pull request #16898: Sparse int64 Large
tensor support
URL: https://github.com/apache/incubator-mxnet/pull/16898#discussion_r362651776
File path: include/mxnet/c_api.h
@@ -643,14 +643,14 @@ MXNET_DLL int
samskalicky opened a new pull request #17204: [WIP] enhancements for MXTensor
for custom operators
URL: https://github.com/apache/incubator-mxnet/pull/17204
## Description ##
Enhancements to MXTensor for custom operators. Adds the following features:
- version IDs to uniquely
reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 06aec8a [CI] Re-enable testing with numpy 1.18 (#17200)
add b9bce35 boolean indexing (#17009)
reminisce merged pull request #17009: [Numpy] support boolean indexing
URL: https://github.com/apache/incubator-mxnet/pull/17009
ptrendx commented on a change in pull request #17049: Fix operators lying about
their number of inputs
URL: https://github.com/apache/incubator-mxnet/pull/17049#discussion_r362596428
File path: src/operator/nn/concat.cc
@@ -394,6 +394,14 @@ CONCAT_FORWARD_ATTRS
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 574e045 Bump the
apeforest commented on a change in pull request #16898: Sparse int64 Large
tensor support
URL: https://github.com/apache/incubator-mxnet/pull/16898#discussion_r362575884
File path: include/mxnet/c_api.h
@@ -643,14 +643,14 @@ MXNET_DLL int MXNDArrayCreateSparseEx(int
ptrendx commented on issue #17164: net.Cast("float16") doesn't work: Check
failed: (*in_type)[i] == dtype_param (2 vs. 0) : This layer requires uniform
type. Expected 'float32' v.s. given 'float16' at 'gamma'
URL:
https://github.com/apache/incubator-mxnet/issues/17164#issuecomment-570295684
yzhliu opened a new pull request #17203: [tvmop] link libtvmop with
libtvm_runtime
URL: https://github.com/apache/incubator-mxnet/pull/17203
This PR links libtvmop with libtvm_runtime, so that symbols in
libtvm_runtime.so can be found by mxnet in order to run tvm operators. It
solves the
rohun-tripathi commented on issue #14357: [Bug] Batchnorm running_var behaves
differently when using gpu vs. cpu
URL:
https://github.com/apache/incubator-mxnet/issues/14357#issuecomment-570269685
Has this bug been fixed in the latest versions?
eric-haibin-lin opened a new pull request #17202: Backport PR 17149 to 1.6.x
URL: https://github.com/apache/incubator-mxnet/pull/17202
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 9cfb81b Bump the
leezu commented on issue #17086: [MKLDNN] RNN Op gradient computation is broken
URL:
https://github.com/apache/incubator-mxnet/issues/17086#issuecomment-570152289
@TaoLv I checked and this issue affects 1.6. As it was recently decided to
distribute the MKL builds by default, this fix must
lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from ffeacfa Workaround for DLL size limitation on Windows (#16980)
add 06aec8a [CI] Re-enable testing
leezu commented on issue #17154:
test_numpy_interoperability.test_np_array_function_protocol broken with numpy
1.18
URL:
https://github.com/apache/incubator-mxnet/issues/17154#issuecomment-570148853
Thank you
leezu merged pull request #17200: [WIP] loosen numpy version requirement
URL: https://github.com/apache/incubator-mxnet/pull/17200
leezu closed issue #17154:
test_numpy_interoperability.test_np_array_function_protocol broken with numpy
1.18
URL: https://github.com/apache/incubator-mxnet/issues/17154
lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from e65fc4b [MKLDNN] Fix _copyto (#17173)
add ffeacfa Workaround for DLL size limitation on Windows
leezu merged pull request #16980: change windows build system
URL: https://github.com/apache/incubator-mxnet/pull/16980
deimsdeutsch commented on issue #17195: Unable to convert Insightface resnet
100 model to onnx
URL:
https://github.com/apache/incubator-mxnet/issues/17195#issuecomment-570144769
Faced a similar issue 2 months ago and decided against it. Glad it is being
taken up.
TaoLv commented on issue #17086: [MKLDNN] RNN Op gradient computation is broken
URL:
https://github.com/apache/incubator-mxnet/issues/17086#issuecomment-570143616
@liuzh91 Nice to hear that and thank you for trying it out. Would you mind
approving #17183 if it looks good to you? Also
zixuanweeei commented on issue #17086: [MKLDNN] RNN Op gradient computation is
broken
URL:
https://github.com/apache/incubator-mxnet/issues/17086#issuecomment-570141151
> @zixuanweeei @TaoLv I can confirm the new patch works correctly on the
language model script. Thanks for the patch.
liuzh91 edited a comment on issue #17086: [MKLDNN] RNN Op gradient computation
is broken
URL:
https://github.com/apache/incubator-mxnet/issues/17086#issuecomment-570140890
@zixuanweeei @TaoLv I can confirm the new patch works correctly on the
language model script. Thanks for the patch.
liuzh91 commented on issue #17086: [MKLDNN] RNN Op gradient computation is
broken
URL:
https://github.com/apache/incubator-mxnet/issues/17086#issuecomment-570140890
@zixuanweeei @TaoLv I can confirm the new patch work correctly on the
language model script. Thanks for the patch.
ChaiBapchya commented on a change in pull request #16898: Sparse int64 Large
tensor support
URL: https://github.com/apache/incubator-mxnet/pull/16898#discussion_r362393825
File path: python/mxnet/ndarray/sparse.py
@@ -41,7 +41,7 @@
from ..base import
IvyGongoogle opened a new issue #17201: mx.model.load_checkpoint can not get
the arg_params and aux_params with correct order
URL: https://github.com/apache/incubator-mxnet/issues/17201
Hello, I use `sym, arg_params, aux_params =
mx.model.load_checkpoint(modele_prefix, epoch)` to load a
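For context on the ordering question: an MXNet checkpoint stores all parameters in one dict whose keys are conventionally prefixed `arg:` or `aux:`, and `load_checkpoint` splits them into `arg_params` and `aux_params`. Since a dict carries no guaranteed order across versions, code that needs a stable order should sort by name. A sketch of that split under that assumption (plain values stand in for NDArrays; the helper name is hypothetical):

```python
def split_checkpoint(saved):
    """Split a loaded checkpoint dict into arg_params / aux_params,
    mirroring the 'arg:'/'aux:' key convention of saved .params files."""
    arg_params, aux_params = {}, {}
    for key, value in saved.items():
        tag, _, name = key.partition(":")
        if tag == "arg":
            arg_params[name] = value
        elif tag == "aux":
            aux_params[name] = value
    return arg_params, aux_params

saved = {"arg:fc1_weight": "W1", "aux:bn_moving_mean": "m", "arg:fc1_bias": "b1"}
arg_params, aux_params = split_checkpoint(saved)
# For a deterministic order, iterate sorted(arg_params) rather than
# relying on the dict's insertion order:
print(sorted(arg_params))  # ['fc1_bias', 'fc1_weight']
```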