junrushao1994 commented on a change in pull request #14315: [numpy] Shape
support scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262826071
##
File path: include/mxnet/tuple.h
##
@@ -316,48 +335,74 @@ class Tuple {
protected:
//
reminisce commented on a change in pull request #14315: [numpy] Shape support
scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262825842
##
File path: include/mxnet/tuple.h
##
@@ -316,48 +335,74 @@ class Tuple {
protected:
//
junrushao1994 commented on a change in pull request #14315: [numpy] Shape
support scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262819311
##
File path: include/mxnet/tuple.h
##
@@ -404,9 +453,11 @@ class TShape : public Tuple {
}
junrushao1994 commented on a change in pull request #14315: [numpy] Shape
support scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262819232
##
File path: include/mxnet/tuple.h
##
@@ -316,48 +335,74 @@ class Tuple {
protected:
//
This is an automated email from the ASF dual-hosted git repository.
reminisce pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new b486594 Register fake grad to
zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new e3799e2 Bump the publish
reminisce merged pull request #14275: Register fake grad to subgraph and
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275
This is an automated message from the Apache Git Service.
To respond to
szha merged pull request #13816: Add default parameters for Scala
NDArray.arange
URL: https://github.com/apache/incubator-mxnet/pull/13816
zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new a0f3f92 Add default parameters for
szha commented on a change in pull request #13907: Fixes downloading of data in
cpp-package/example/get_data.sh
URL: https://github.com/apache/incubator-mxnet/pull/13907#discussion_r262815070
##
File path: cpp-package/example/get_data.sh
##
@@ -14,28 +15,29 @@
# KIND,
junrushao1994 edited a comment on issue #14342: Ensure all usage of `ndim` to
be 0-d tensor compatible
URL:
https://github.com/apache/incubator-mxnet/issues/14342#issuecomment-469992767
@reminisce Yep. I intentionally ignored `src/operator` in this list because
there are too many
junrushao1994 commented on issue #14342: Ensure all usage of `ndim` to be 0-d
tensor compatible
URL:
https://github.com/apache/incubator-mxnet/issues/14342#issuecomment-469992767
@reminisce Yep. I intentionally ignored `src/operator` in this list because
there are too many occurrences in
reminisce commented on issue #14342: Ensure all usage of `ndim` to be 0-d
tensor compatible
URL:
https://github.com/apache/incubator-mxnet/issues/14342#issuecomment-469992031
Also include all the infer shape functions of operators.
https://github.com/apache/incubator-mxnet/issues/14323
szha merged pull request #14222: Add more support for mxnet_to_coreml
URL: https://github.com/apache/incubator-mxnet/pull/14222
zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new fccce20 Add more support for
szha merged pull request #14258: fix render issue in NDArray linalg docs
URL: https://github.com/apache/incubator-mxnet/pull/14258
zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 184c2a5 fix render issue in NDArray
szha merged pull request #14303: [MXNET-1331] Removal of non-MXNET classes from
JAR
URL: https://github.com/apache/incubator-mxnet/pull/14303
iblis17 commented on issue #13992: Julia: add binding for runtime feature
detection
URL: https://github.com/apache/incubator-mxnet/pull/13992#issuecomment-469985176
sure
iblis pushed a change to branch ib/jl-runtime-features
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
discard 6d9e429 update
discard 271fed6 mx.isenabled
discard 03698eb 2 space
discard ec80b31
junrushao1994 opened a new issue #14342: Ensure all usage of `ndim` to be 0-d
tensor compatible
URL: https://github.com/apache/incubator-mxnet/issues/14342
After #14315, we should check all occurrences of `ndim` to ensure they
are used correctly under the semantics of 0-d tensors.
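The distinction being audited can be sketched as follows — a hypothetical Python mock, assuming the sentinel convention #14315 introduces (`ndim == -1` for "unknown", `ndim == 0` for a genuine scalar); the helper names echo `include/mxnet/tuple.h`, but this is not MXNet's actual API:

```python
# Assumed convention after #14315: -1 is the "unknown" sentinel, so a 0-d
# (scalar) tensor with ndim == 0 must no longer be treated as "shape unknown".
def ndim_is_known(ndim):
    assert ndim >= -1
    return ndim != -1

def shape_is_known(shape):
    # `None` stands for "ndim itself unknown"; any -1 entry is an unknown dim.
    if shape is None:
        return False
    return all(dim != -1 for dim in shape)

print(ndim_is_known(0))         # True: a scalar tensor, not an unknown shape
print(ndim_is_known(-1))        # False: shape not yet inferred
print(shape_is_known(()))       # True: the empty tuple is the valid 0-d shape
print(shape_is_known((2, -1)))  # False: second dim still unknown
```

Every `ndim == 0` check that previously meant "uninitialized" is exactly the kind of occurrence the issue asks to audit.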
mxnet-label-bot commented on issue #14342: Ensure all usage of `ndim` to be 0-d
tensor compatible
URL:
https://github.com/apache/incubator-mxnet/issues/14342#issuecomment-469984212
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some
szha commented on issue #13992: Julia: add binding for runtime feature detection
URL: https://github.com/apache/incubator-mxnet/pull/13992#issuecomment-469984071
could you try another rebase? CI had a hiccup a couple of days ago.
iblis17 commented on issue #13992: Julia: add binding for runtime feature
detection
URL: https://github.com/apache/incubator-mxnet/pull/13992#issuecomment-469982742
Well, the CI status blocks the merge button.
arcadiaphy commented on issue #14058: add backgroud class in box_nms
URL: https://github.com/apache/incubator-mxnet/pull/14058#issuecomment-469982602
@wkcn CI seems OK now, ready for merge.
anirudh2290 commented on a change in pull request #14275: Register fake grad to
subgraph and quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#discussion_r262802791
##
File path: src/executor/graph_executor.cc
##
@@ -1506,8 +1506,26 @@ static
arcadiaphy commented on issue #14329: [Flaky] flaky test in
test_operator_gpu.test_convolution_multiple_streams
URL:
https://github.com/apache/incubator-mxnet/issues/14329#issuecomment-469981558
@DickJC123 OK. Let me find out the problem behind the threaded engine crash.
yuxihu commented on a change in pull request #14339: Add MKLDNN headers to pip
package
URL: https://github.com/apache/incubator-mxnet/pull/14339#discussion_r262797531
##
File path: tools/pip/setup.py
##
@@ -97,6 +97,8 @@ def has_ext_modules(self):
szha commented on issue #14053: in-place reshape ops
URL: https://github.com/apache/incubator-mxnet/pull/14053#issuecomment-469971058
Nonetheless it should not throw an error when using inplace with hybridize.
Let me see how to best deal with it.
szha commented on issue #14053: in-place reshape ops
URL: https://github.com/apache/incubator-mxnet/pull/14053#issuecomment-469970660
@wkcn good point on symbol and HybridBlock. I'm not sure if a warning should
be added as it can get very verbose.
szha commented on issue #14315: [numpy] Shape support scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#issuecomment-469969704
Thanks for addressing my concerns
reminisce commented on a change in pull request #14315: [numpy] Shape support
scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262791025
##
File path: include/mxnet/tuple.h
##
@@ -220,6 +235,10 @@ class Tuple {
* \return the ostream
zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 111b881 Limit workspace for
szha merged pull request #14326: Limit workspace for cudnnGet results
URL: https://github.com/apache/incubator-mxnet/pull/14326
stereomatchingkiss edited a comment on issue #11769: USE_BLAS=MKL fails due to
mshadow requiring openblas
URL:
https://github.com/apache/incubator-mxnet/issues/11769#issuecomment-469963504
OS: Win10 64-bit
compiler: VC2015 64-bit Update 3 (this version cannot build with OpenBLAS,
stereomatchingkiss commented on issue #11769: USE_BLAS=MKL fails due to mshadow
requiring openblas
URL:
https://github.com/apache/incubator-mxnet/issues/11769#issuecomment-469963504
OS: Win10 64-bit
compiler: VC2015 64-bit Update 3 (this version cannot build with OpenBLAS,
lots of
szha commented on a change in pull request #14339: Add MKLDNN headers to pip
package
URL: https://github.com/apache/incubator-mxnet/pull/14339#discussion_r262787656
##
File path: tools/pip/setup.py
##
@@ -97,6 +97,8 @@ def has_ext_modules(self):
szha commented on a change in pull request #14315: [numpy] Shape support scalar
tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262787416
##
File path: include/mxnet/tuple.h
##
@@ -220,6 +235,10 @@ class Tuple {
* \return the ostream
wkcn commented on issue #14258: fix render issue in NDArray linalg docs
URL: https://github.com/apache/incubator-mxnet/pull/14258#issuecomment-469959859
@aaronmarkham Hi, I have removed the whitespace.
wkcn commented on issue #14338: Bypass ThreadedEngine in
test_convolution_multiple_streams.
URL: https://github.com/apache/incubator-mxnet/pull/14338#issuecomment-469954317
Merged. Thank you!
wkcn pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new d6eafca Bypass ThreadedEngine in
wkcn merged pull request #14338: Bypass ThreadedEngine in
test_convolution_multiple_streams.
URL: https://github.com/apache/incubator-mxnet/pull/14338
sleepwalker2017 opened a new issue #14341: Is the mxnet-tensorrt integration
available in C++?
URL: https://github.com/apache/incubator-mxnet/issues/14341
I see the doc here:
mxnet-label-bot commented on issue #14341: Is the mxnet-tensorrt integration
available in C++?
URL:
https://github.com/apache/incubator-mxnet/issues/14341#issuecomment-469952528
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels
DickJC123 commented on issue #14338: Bypass ThreadedEngine in
test_convolution_multiple_streams.
URL: https://github.com/apache/incubator-mxnet/pull/14338#issuecomment-469948098
@szha @eric-haibin-lin This is ready for merging IMHO.
ThomasDelteil commented on issue #14340: [bug] Bug in Gradient flow with
backward(retain_graph=True) and split()
URL:
https://github.com/apache/incubator-mxnet/issues/14340#issuecomment-469934034
replacing
```
z1, z2 = F.split(y, 2)
out1 = self.classifier(z1)
```
TaoLv commented on issue #14128: MKLDNN based Quantized FullyConnected Operator
and its fusion
URL: https://github.com/apache/incubator-mxnet/pull/14128#issuecomment-469933071
@szha please confirm your concerns are fully addressed. Thanks.
mxnet-label-bot commented on issue #14340: [bug] Bug in Gradient flow with
backward(retain_graph=True) and split()
URL:
https://github.com/apache/incubator-mxnet/issues/14340#issuecomment-469932380
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and
ThomasDelteil opened a new issue #14340: [bug] Bug in Gradient flow with
backward(retain_graph=True) and split()
URL: https://github.com/apache/incubator-mxnet/issues/14340
See below code.
I get 4 different values for my gradient, when I should be getting the same
value (the first one).
taolv pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 49d7fc6 Enhance gpu quantization
TaoLv closed issue #14092: Improve message when quantized_dtype uint8 is used
with gpu context
URL: https://github.com/apache/incubator-mxnet/issues/14092
taolv pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new f2497aa Updated news.md with the
TaoLv merged pull request #14094: Enhance gpu quantization
URL: https://github.com/apache/incubator-mxnet/pull/14094
TaoLv merged pull request #14298: Updated news.md with the latest mkldnn
submodule version
URL: https://github.com/apache/incubator-mxnet/pull/14298
TaoLv commented on issue #14298: Updated news.md with the latest mkldnn
submodule version
URL: https://github.com/apache/incubator-mxnet/pull/14298#issuecomment-469931505
Thank you. Merging now.
jiangzhengkai closed issue #13829: Mxnet Training deterministic
URL: https://github.com/apache/incubator-mxnet/issues/13829
zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 39b8e23 Bump the publish
reminisce commented on a change in pull request #14315: [numpy] Shape support
scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262753902
##
File path: include/mxnet/tuple.h
##
@@ -404,9 +452,11 @@ class TShape : public Tuple {
}
yuxihu commented on issue #14339: Add MKLDNN headers to pip package
URL: https://github.com/apache/incubator-mxnet/pull/14339#issuecomment-469921549
@lanking520 @apeforest @szha please help review.
@mxnet-label-bot update [pr-awaiting-review]
yuxihu opened a new pull request #14339: Add MKLDNN headers to pip package
URL: https://github.com/apache/incubator-mxnet/pull/14339
Following [14300](https://github.com/apache/incubator-mxnet/pull/14300), to
support running Horovod with MKLDNN enabled MXNet pip packages, we also need to
reminisce pushed a change to branch numpy
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 992c3c0 [MXNET-1330] Bring nnvm::Tuple to mxnet::Tuple (#14270)
add c6b1fd5 MXNET-1302 Exclude
apeforest commented on a change in pull request #14031: Fix transposed
convolution in CPU w/o MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/14031#discussion_r262752207
##
File path: src/operator/nn/deconvolution-inl.h
##
@@ -251,76 +254,57 @@ class
apeforest commented on a change in pull request #14031: Fix transposed
convolution in CPU w/o MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/14031#discussion_r262752392
##
File path: src/operator/nn/deconvolution-inl.h
##
@@ -373,55 +357,46 @@ class
apeforest commented on a change in pull request #14031: Fix transposed
convolution in CPU w/o MKLDNN.
URL: https://github.com/apache/incubator-mxnet/pull/14031#discussion_r262752137
##
File path: src/operator/nn/deconvolution-inl.h
##
@@ -485,7 +455,6 @@ class
vandanavk commented on issue #14338: Bypass ThreadedEngine in
test_convolution_multiple_streams.
URL: https://github.com/apache/incubator-mxnet/pull/14338#issuecomment-469914684
@mxnet-label-bot add [Flaky, Test, pr-awaiting-review]
DickJC123 commented on issue #14329: [Flaky] flaky test in
test_operator_gpu.test_convolution_multiple_streams
URL:
https://github.com/apache/incubator-mxnet/issues/14329#issuecomment-469914392
Rather than open up a new issue to track any follow-up PR from @arcadiaphy
I thought it best
szha commented on a change in pull request #14315: [numpy] Shape support scalar
tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262744359
##
File path: include/mxnet/tuple.h
##
@@ -404,9 +452,11 @@ class TShape : public Tuple {
}
/*!
szha commented on a change in pull request #14315: [numpy] Shape support scalar
tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262744066
##
File path: include/mxnet/tuple.h
##
@@ -404,9 +452,11 @@ class TShape : public Tuple {
}
/*!
DickJC123 opened a new pull request #14338: Bypass ThreadedEngine in
test_convolution_multiple_streams.
URL: https://github.com/apache/incubator-mxnet/pull/14338
## Description ##
Per discussion with @szha, this PR stabilizes the CI by eliminating errors
in
wkcn commented on issue #14313: compatibility with opencv4
URL: https://github.com/apache/incubator-mxnet/pull/14313#issuecomment-469907275
@szha I see. I used wrong linking order. I will fix it. Thank you!
szha commented on a change in pull request #14315: [numpy] Shape support scalar
tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262739057
##
File path: include/mxnet/tensor_blob.h
##
@@ -196,6 +196,10 @@ class TBlob {
szha commented on a change in pull request #14315: [numpy] Shape support scalar
tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262738706
##
File path: include/mxnet/tensor_blob.h
##
@@ -196,6 +196,10 @@ class TBlob {
reminisce commented on a change in pull request #14315: [numpy] Shape support
scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262738252
##
File path: include/mxnet/tuple.h
##
@@ -220,6 +237,7 @@ class Tuple {
* \return the ostream
piyushghai commented on issue #10883: make err on RK3399
URL:
https://github.com/apache/incubator-mxnet/issues/10883#issuecomment-469903767
@lovehuanhuan Did @kalyc 's suggestions help you out ?
If yes, please feel free to close the issue.
wkcn commented on issue #14327: [numpy] Implement NumPy operators
URL:
https://github.com/apache/incubator-mxnet/issues/14327#issuecomment-469902619
@reminisce I see. Thank you!
reminisce commented on a change in pull request #14315: [numpy] Shape support
scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262735420
##
File path: include/mxnet/tuple.h
##
@@ -106,7 +113,10 @@ class Tuple {
inline void
reminisce commented on a change in pull request #14315: [numpy] Shape support
scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262735160
##
File path: include/mxnet/tensor_blob.h
##
@@ -196,6 +196,10 @@ class TBlob {
haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 427b6d4 Fix shape inference pass
haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new d754da3 Relaxing type requirements
eric-haibin-lin merged pull request #14325: Relaxing type requirements for
reshape_like op
URL: https://github.com/apache/incubator-mxnet/pull/14325
eric-haibin-lin merged pull request #14153: Fix shape inference pass
URL: https://github.com/apache/incubator-mxnet/pull/14153
aaronmarkham commented on issue #14258: fix render issue in NDArray linalg docs
URL: https://github.com/apache/incubator-mxnet/pull/14258#issuecomment-469898823
I probably introduced whitespace lint issues trying to nudge CI.
piyushghai commented on issue #11654: CI Problem: Build status not reflected on
PR
URL:
https://github.com/apache/incubator-mxnet/issues/11654#issuecomment-469896999
@jlcontreras Were you able to push a fix on the Jenkinsci repo for this ?
Please feel free to close the issue if
piyushghai commented on issue #13798: C++ generate wrong prediction results on
GPU
URL:
https://github.com/apache/incubator-mxnet/issues/13798#issuecomment-469895236
@jacksonxliu Did @leleamol 's suggestions help you out ?
If yes, please feel free to close the issue.
szha commented on a change in pull request #14315: [numpy] Shape support scalar
tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262728226
##
File path: include/mxnet/tensor_blob.h
##
@@ -196,6 +196,10 @@ class TBlob {
junrushao1994 commented on a change in pull request #14315: [numpy] Shape
support scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262726227
##
File path: src/operator/operator_common.h
##
@@ -159,16 +169,16 @@ inline std::string
junrushao1994 commented on a change in pull request #14315: [numpy] Shape
support scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262724806
##
File path: include/mxnet/tuple.h
##
@@ -220,6 +237,7 @@ class Tuple {
* \return the
junrushao1994 commented on a change in pull request #14315: [numpy] Shape
support scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262724503
##
File path: include/mxnet/tuple.h
##
@@ -186,6 +201,7 @@ class Tuple {
* \return the
junrushao1994 commented on a change in pull request #14315: [numpy] Shape
support scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262724630
##
File path: include/mxnet/tuple.h
##
@@ -186,6 +201,7 @@ class Tuple {
* \return the
junrushao1994 commented on a change in pull request #14315: [numpy] Shape
support scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262724652
##
File path: include/mxnet/tuple.h
##
@@ -194,6 +210,7 @@ class Tuple {
* \return the
junrushao1994 commented on a change in pull request #14315: [numpy] Shape
support scalar tensor
URL: https://github.com/apache/incubator-mxnet/pull/14315#discussion_r262724040
##
File path: include/mxnet/tuple.h
##
@@ -106,7 +113,10 @@ class Tuple {
inline void
szha commented on issue #14329: [Flaky] flaky test in
test_operator_gpu.test_convolution_multiple_streams
URL:
https://github.com/apache/incubator-mxnet/issues/14329#issuecomment-469886673
@DickJC123 this plan sounds good. Thanks.
junrushao1994 commented on issue #14244: Feature request: share memory between
numpy.array and mxnet.ndarray
URL:
https://github.com/apache/incubator-mxnet/issues/14244#issuecomment-469886372
The point is that we need to transfer/share ownership of numpy's ndarray to
mxnet's C++ backend,
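The buffer-sharing semantics being requested can be illustrated within NumPy alone — a hedged sketch that deliberately uses no MXNet call, since zero-copy numpy↔mxnet conversion is precisely what does not exist yet. Two arrays view one buffer, which is why ownership/lifetime must be handed to the backend:

```python
import numpy as np

# Two ndarrays viewing one buffer: a write through either is visible in both.
# This is the sharing the feature request wants between np.ndarray and
# mxnet.ndarray; the hard part noted above is transferring ownership so the
# C++ backend can keep the buffer alive after the Python reference goes away.
a = np.arange(4, dtype=np.float32)
b = np.frombuffer(a.data, dtype=np.float32)  # zero-copy view, not a copy

b[0] = 42.0
print(a[0])             # 42.0 -- the write through `b` is visible through `a`
print(b.flags.owndata)  # False -- `b` borrows `a`'s buffer
```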
DickJC123 commented on issue #14329: [Flaky] flaky test in
test_operator_gpu.test_convolution_multiple_streams
URL:
https://github.com/apache/incubator-mxnet/issues/14329#issuecomment-469885039
To debug this further, I checked out the ASAN PR commit, reverted my
dual-stream PR, then
DickJC123 commented on issue #14310: Cudnn conv dgrad algo filtering
URL: https://github.com/apache/incubator-mxnet/pull/14310#issuecomment-469868569
This PR is in a good state to review. I hope you guys like the
test-driven-development ;-)
@szha @eric-haibin-lin @marcoabreu