hellonico opened a new pull request #13624: WIP: Nightly Tests For Clojure
URL: https://github.com/apache/incubator-mxnet/pull/13624
## Description ##
This is about running the integration tests during the nightly CI.
## Checklist ##
### Essentials ###
Please feel free to re
MyYaYa commented on issue #13001: Feature request: numpy.cumsum
URL:
https://github.com/apache/incubator-mxnet/issues/13001#issuecomment-446489801
Yeah, if ndarray supports cumsum operation, some custom metrics(i.e. mAP)
will benefit from that.
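A minimal numpy sketch of why cumsum helps with mAP-style metrics: a cumulative true-positive count gives precision and recall at every cutoff in one vectorized step. The toy labels and variable names here are purely illustrative:

```python
import numpy as np

# Binary relevance labels of detections, pre-sorted by descending confidence.
labels = np.array([1, 0, 1, 1, 0])

# Cumulative true-positive count at each cutoff k -- the cumsum step.
tp = np.cumsum(labels)                           # [1, 1, 2, 3, 3]
precision = tp / np.arange(1, len(labels) + 1)   # precision@k
recall = tp / labels.sum()                       # recall@k

# Average precision: mean of the precision values at each true positive.
ap = (precision * labels).sum() / labels.sum()
```

Without a native cumsum, the `tp` column has to be built in a Python loop, which is exactly the overhead custom metrics want to avoid.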
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 8f01f4b Bump the publish ti
chinakook commented on issue #8335: Performance of MXNet on Windows is lower
than that on Linux by 15%-20%
URL:
https://github.com/apache/incubator-mxnet/issues/8335#issuecomment-446478064
Thanks, I will try later.
zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable
dynamic shape in CachedOp
URL: https://github.com/apache/incubator-mxnet/pull/13419#discussion_r240883314
##
File path: src/imperative/cached_op.cc
##
@@ -262,6 +262,29 @@ std::vector CachedOp::Grad
zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable
dynamic shape in CachedOp
URL: https://github.com/apache/incubator-mxnet/pull/13419#discussion_r240890932
##
File path: src/imperative/imperative_utils.cc
##
@@ -22,6 +22,114 @@
namespace mxnet {
zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable
dynamic shape in CachedOp
URL: https://github.com/apache/incubator-mxnet/pull/13419#discussion_r240889937
##
File path: src/imperative/cached_op.cc
##
@@ -834,6 +861,61 @@ OpStatePtr CachedOp::Dynam
zheng-da commented on a change in pull request #13419: [MXNET-1233] Enable
dynamic shape in CachedOp
URL: https://github.com/apache/incubator-mxnet/pull/13419#discussion_r240890307
##
File path: src/imperative/imperative_utils.cc
##
@@ -22,6 +22,114 @@
namespace mxnet {
TaoLv commented on a change in pull request #13150: support mkl log when dtype
is fp32 or fp64
URL: https://github.com/apache/incubator-mxnet/pull/13150#discussion_r240885436
##
File path: src/operator/tensor/elemwise_unary_op.h
##
@@ -348,6 +352,42 @@ class UnaryOp : publ
TaoLv commented on a change in pull request #13150: support mkl log when dtype
is fp32 or fp64
URL: https://github.com/apache/incubator-mxnet/pull/13150#discussion_r240885471
##
File path: src/operator/tensor/elemwise_unary_op.h
##
@@ -348,6 +352,42 @@ class UnaryOp : publ
TaoLv commented on a change in pull request #13150: support mkl log when dtype
is fp32 or fp64
URL: https://github.com/apache/incubator-mxnet/pull/13150#discussion_r240885737
##
File path: src/operator/tensor/elemwise_unary_op.h
##
@@ -34,6 +34,10 @@
#include "../mxnet_op
pengzhao-intel commented on a change in pull request #13599: fallback to dense
version for grad(reshape), grad(expand_dims)
URL: https://github.com/apache/incubator-mxnet/pull/13599#discussion_r240881956
##
File path: src/operator/tensor/elemwise_unary_op_basic.cc
##
@@ -2
pengzhao-intel edited a comment on issue #13602: Fix for import mxnet taking
long time if multiple process launched
URL: https://github.com/apache/incubator-mxnet/pull/13602#issuecomment-446459523
Based on 10560's
[comment](https://github.com/apache/incubator-mxnet/issues/10560#issuecommen
zheng-da commented on issue #13599: fallback to dense version for
grad(reshape), grad(expand_dims)
URL: https://github.com/apache/incubator-mxnet/pull/13599#issuecomment-446459824
Otherwise, it looks good to me.
BTW, I don't think it has anything to do with sparse.
pengzhao-intel commented on issue #13602: Fix for import mxnet taking long time
if multiple process launched
URL: https://github.com/apache/incubator-mxnet/pull/13602#issuecomment-446459523
Based on 10560's
[comment](https://github.com/apache/incubator-mxnet/issues/10560#issuecomment-38151
zheng-da commented on a change in pull request #13599: fallback to dense
version for grad(reshape), grad(expand_dims)
URL: https://github.com/apache/incubator-mxnet/pull/13599#discussion_r240880048
##
File path: src/operator/tensor/elemwise_unary_op_basic.cc
##
@@ -236,6 +
pengzhao-intel commented on issue #12203: flaky test:
test_operator_gpu.test_depthwise_convolution
URL:
https://github.com/apache/incubator-mxnet/issues/12203#issuecomment-446457845
@juliusshufan could you help take a look at this test case?
StatML commented on issue #11868: nnpack_fully_connected-inl.h:45:55: error:
expected template-name before ‘<’ token > class NNPACKFullyConnectedOp :
public FullyConnectedOp { >
^
URL:
https://github.com/apache/incubator-mxnet/issues/11868
jiangzhengkai removed a comment on issue #13623: This convolution is not
supported by cudnn
URL:
https://github.com/apache/incubator-mxnet/issues/13623#issuecomment-446449640
Hi, I'm using MXNet version 1.5.0. The log "This convolution is not
supported by cudnn" is really
uncomfortabl
jiangzhengkai commented on issue #13623: This convolution is not supported by
cudnn
URL:
https://github.com/apache/incubator-mxnet/issues/13623#issuecomment-446449640
Hi, I'm using MXNet version 1.5.0. The log "This convolution is not
supported by cudnn" is really
uncomfortable. How t
jiangzhengkai opened a new issue #13623: This convolution is not supported by
cudnn
URL: https://github.com/apache/incubator-mxnet/issues/13623
Hi, I'm using MXNet version 1.5.0. The log "This convolution is not
supported by cudnn" is really
uncomfortable. How to suppress the war
xinyu-intel opened a new pull request #13622: [WIP]Fix SSD accuracy variance
URL: https://github.com/apache/incubator-mxnet/pull/13622
## Description ##
This PR is to fix omp bug in `multibox_detection` to avoid accuracy variance
of SSD topology.
## Checklist ##
### Essentials
TaoLv commented on issue #13150: support mkl log when dtype is fp32 or fp64
URL: https://github.com/apache/incubator-mxnet/pull/13150#issuecomment-446430758
@XiaotaoChen since #13607 is merged, please rebase your code and retrigger CI.
Make sure that the unit test for the log operator can run correctly
ifeherva commented on issue #3118: Gradient reversal layer without custom
operator
URL:
https://github.com/apache/incubator-mxnet/issues/3118#issuecomment-446429492
I currently implement this as a custom operator; however, I think this would
be of interest to others. Would it make sense to
TaoLv commented on issue #12922: Support Quantized Fully Connected by INT8 GEMM
URL: https://github.com/apache/incubator-mxnet/pull/12922#issuecomment-446427791
The test case needs to be refined so that it can run with MKL BLAS.
This is an automated email from the ASF dual-hosted git repository.
nswamy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 9ce7eab Fix warning in waitall doc (#1
nswamy closed pull request #13618: Fix warning in waitall doc
URL: https://github.com/apache/incubator-mxnet/pull/13618
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
As this is a foreign pull req
xinyu-intel commented on issue #12922: Support Quantized Fully Connected by
INT8 GEMM
URL: https://github.com/apache/incubator-mxnet/pull/12922#issuecomment-446425642
@lihaofd please rebase your code and trigger the MKL CI.
ciyongch commented on issue #13596: Fix quantize pass error when excluding a
quantization supported op
URL: https://github.com/apache/incubator-mxnet/pull/13596#issuecomment-446422951
@roywei There is no open issue related to it so far. We found this error when
trying to quantize Resnet50_v1 at loc
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new e97e209 Bump the publish ti
roywei commented on issue #13588: Accelerate DGL csr neighbor sampling
URL: https://github.com/apache/incubator-mxnet/pull/13588#issuecomment-446418476
@mxnet-label-bot add[Operator, pr-awaiting-review]
anirudh2290 commented on issue #13618: Fix warning in waitall doc
URL: https://github.com/apache/incubator-mxnet/pull/13618#issuecomment-446418635
Preview here:
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-13618/1/api/python/ndarray/ndarray.html?highlight=waitall#mxnet.ndarr
lanking520 opened a new pull request #13621: [MXNET-1251] Basic configuration
to do static-linking
URL: https://github.com/apache/incubator-mxnet/pull/13621
## Description ##
For Ubuntu 14.04 base build to install all dependencies.
@szha @zachgk
## Checklist ##
### Essentia
roywei commented on issue #13588: Accelerate DGL csr neighbor sampling
URL: https://github.com/apache/incubator-mxnet/pull/13588#issuecomment-446418341
@BullDemonKing Thanks for the contribution! Could you take a look at the failed
tests?
@zheng-da @eric-haibin-lin could you take a look?
roywei commented on issue #13590: fix Makefile for rpkg
URL: https://github.com/apache/incubator-mxnet/pull/13590#issuecomment-446418055
@mxnet-label-bot [R, pr-awaiting-review]
roywei commented on issue #13590: fix Makefile for rpkg
URL: https://github.com/apache/incubator-mxnet/pull/13590#issuecomment-446417917
@jeremiedb Thanks for the contribution, could you take a look at the failed
unit test?
ping @anirudhacharya @ankkhedia for review
roywei commented on issue #13591: Add a DGL operator to compute vertex Ids in
layers
URL: https://github.com/apache/incubator-mxnet/pull/13591#issuecomment-446417396
@mxnet-label-bot add[Operator, pr-awaiting-review]
roywei commented on issue #13591: Add a DGL operator to compute vertex Ids in
layers
URL: https://github.com/apache/incubator-mxnet/pull/13591#issuecomment-446417351
@BullDemonKing Thanks for the contribution. @zheng-da @apeforest could you
take a look?
roywei commented on issue #13596: Fix quantize pass error when excluding a
quantization supported op
URL: https://github.com/apache/incubator-mxnet/pull/13596#issuecomment-446417029
@mxnet-label-bot add [Operator, pr-awaiting-review]
@ciyongch Thanks for the contribution, is there an iss
roywei commented on issue #13597: [MXNET-1255] update hybridize documentation
URL: https://github.com/apache/incubator-mxnet/pull/13597#issuecomment-446416327
@mxnet-label-bot add[Doc, pr-awaiting-review]
zachgk commented on a change in pull request #13619: [MXNET-1231] Allow not
using Some in the Scala operators
URL: https://github.com/apache/incubator-mxnet/pull/13619#discussion_r240843382
##
File path:
scala-package/core/src/test/scala/org/apache/mxnet/NDArraySuite.scala
###
roywei commented on issue #13602: Fix for import mxnet taking long time if
multiple process launched
URL: https://github.com/apache/incubator-mxnet/pull/13602#issuecomment-446416195
@mxnet-label-bot add[Environment Variables, Operator]
lanking520 opened a new pull request #13620: [WIP] add examples and fix the
dependency problem
URL: https://github.com/apache/incubator-mxnet/pull/13620
## Description ##
Add a use case in the java demo explaining the usage of ParamObject
@andrewfayres @zachgk @piyushghai @nswamy
roywei commented on issue #13604: [WIP] onnx broadcast ops fixes
URL: https://github.com/apache/incubator-mxnet/pull/13604#issuecomment-446415455
@Roshrini Thanks for the contribution; it seems one of the ONNX unit tests
failed.
@vandanavk for review
roywei commented on issue #13604: [WIP] onnx broadcast ops fixes
URL: https://github.com/apache/incubator-mxnet/pull/13604#issuecomment-446415242
@mxnet-label-bot add[ONNX, pr-awaiting-review]
roywei commented on issue #13606: Complimentary gluon DataLoader improvements
URL: https://github.com/apache/incubator-mxnet/pull/13606#issuecomment-446415025
@mxnet-label-bot add[Data-loading, pr-awaiting-review]
roywei commented on issue #13609: [MXNET-1258]fix unittest for ROIAlign Operator
URL: https://github.com/apache/incubator-mxnet/pull/13609#issuecomment-446413342
@mxnet-label-bot add[CI, Flaky, pr-awaiting-review]
roywei commented on issue #13611: add image resize operator and unit test
URL: https://github.com/apache/incubator-mxnet/pull/13611#issuecomment-446412852
@mxnet-label-bot add[Operator, pr-awaiting-review]
roywei commented on issue #13612: add pos_weight for
SigmoidBinaryCrossEntropyLoss
URL: https://github.com/apache/incubator-mxnet/pull/13612#issuecomment-446412567
@mxnet-label-bot add[Gluon, pr-awaiting-review]
roywei commented on issue #13612: add pos_weight for
SigmoidBinaryCrossEntropyLoss
URL: https://github.com/apache/incubator-mxnet/pull/13612#issuecomment-446412389
@eureka7mt Thanks for the contribution, could you add a unit test for this
case?
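For context on what such a unit test would check: `pos_weight` scales the positive term of the sigmoid binary cross-entropy, so false negatives cost more. A minimal numpy sketch of that formula, assuming the semantics other frameworks use for this parameter; the helper `weighted_bce` is hypothetical and may not match the PR's exact implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def weighted_bce(pred, label, pos_weight=1.0):
    """Sigmoid binary cross-entropy with an extra weight on positive labels.

    pos_weight > 1 penalizes missed positives more heavily, which is the
    usual motivation for the parameter on imbalanced data.
    """
    p = sigmoid(pred)
    return -(pos_weight * label * np.log(p) + (1.0 - label) * np.log(1.0 - p))
```

A unit test can then compare the operator's output against this reference for a few values of `pos_weight`, including `pos_weight=1.0`, where it must reduce to the unweighted loss.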
roywei commented on issue #13614: Make to_tensor and normalize to accept 3D or
4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#issuecomment-446411763
@mxnet-label-bot add[Gluon, Data-loading, Operator]
azai91 commented on issue #13084: Test/mkldnn batch norm op
URL: https://github.com/apache/incubator-mxnet/pull/13084#issuecomment-446411222
@Vikas89 added
This is an automated message from the Apache Git Service.
roywei commented on issue #13618: Fix warning in waitall doc
URL: https://github.com/apache/incubator-mxnet/pull/13618#issuecomment-446409938
@mxnet-label-bot add[Website, pr-awaiting-review]
roywei commented on issue #13619: [MXNET-1231] Allow not using Some in the
Scala operators
URL: https://github.com/apache/incubator-mxnet/pull/13619#issuecomment-446409294
@lanking520 Thanks for the contribution! Is there any documentation on how to
use SomeConversion?
@mxnet-label-bot add[Scala
vandanavk commented on issue #13356: ONNX export: Add Flatten before Gemm
URL: https://github.com/apache/incubator-mxnet/pull/13356#issuecomment-446403176
@zhreshold for review
anirudh2290 commented on a change in pull request #13602: Fix for import mxnet
taking long time if multiple process launched
URL: https://github.com/apache/incubator-mxnet/pull/13602#discussion_r240829402
##
File path: src/operator/operator_tune-inl.h
##
@@ -56,7 +56,7 @@
lanking520 closed pull request #13364: [MXNET-1225] Always use config.mk in
make install instructions
URL: https://github.com/apache/incubator-mxnet/pull/13364
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of p
This is an automated email from the ASF dual-hosted git repository.
lanking pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 97e0c97 [MXNET-1225] Always use confi
lanking520 closed pull request #13493: [MXNET-1224]: improve scala maven jni
build.
URL: https://github.com/apache/incubator-mxnet/pull/13493
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
As thi
anirudh2290 commented on a change in pull request #13602: Fix for import mxnet
taking long time if multiple process launched
URL: https://github.com/apache/incubator-mxnet/pull/13602#discussion_r240818689
##
File path: docs/faq/env_var.md
##
@@ -226,12 +226,11 @@ Settings
This is an automated email from the ASF dual-hosted git repository.
lanking pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new b242b0c [MXNET-1224]: improve scala m
This is an automated email from the ASF dual-hosted git repository.
lanking pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new a4c97ec [MXNET-1155] Add scala packag
lanking520 closed pull request #13046: [MXNET-1155] Add scala packageTest
utility
URL: https://github.com/apache/incubator-mxnet/pull/13046
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
As this
ChaiBapchya edited a comment on issue #13039: [MXNET-918] Random module
URL: https://github.com/apache/incubator-mxnet/pull/13039#issuecomment-446395402
It supports int32 and int64 (default to int32)
lanking520 closed pull request #13617: [MXNET-1257] fix the Float not showing
correctly problem
URL: https://github.com/apache/incubator-mxnet/pull/13617
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provena
This is an automated email from the ASF dual-hosted git repository.
lanking pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 1f8bb26 fix the Float not showing cor
lanking520 opened a new pull request #13619: [MXNET-1231] Allow not using Some
in the Scala operators
URL: https://github.com/apache/incubator-mxnet/pull/13619
## Description ##
Adding a new utility called SomeConversion; importing it helps reduce the
use of Some throughout the operators.
@nswamy @yzhliu
Vikas89 commented on issue #13438: libc getenv is not threadsafe
URL:
https://github.com/apache/incubator-mxnet/issues/13438#issuecomment-446396227
@anirudh2290 good one!
I think this is a problem in general. For this particular case, we can try to
use fork handlers:
pthread_at_fork(pre
ChaiBapchya commented on issue #13039: [MXNET-918] Random module
URL: https://github.com/apache/incubator-mxnet/pull/13039#issuecomment-446395402
It supports int (default to int32)
lanking520 commented on issue #13039: [MXNET-918] Random module
URL: https://github.com/apache/incubator-mxnet/pull/13039#issuecomment-446394609
@ChaiBapchya what is the data type of `low` and `high`? It seems they are not
interpreted correctly in Scala?
@mdespriee please try to pull --rebase ups
ChaiBapchya commented on issue #13039: [MXNET-918] Random module
URL: https://github.com/apache/incubator-mxnet/pull/13039#issuecomment-446393785
Operator definition
https://github.com/apache/incubator-mxnet/blob/449e17dbf2ec671037d4b127a28897b157f80bf3/src/operator/random/sample_op.h#L
stu1130 commented on a change in pull request #13614: Make to_tensor and
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240820503
##
File path: src/operator/image/image_random-inl.h
##
@@ -62,6 +67,23 @@ inl
sandeep-krishnamurthy commented on a change in pull request #13614: Make
to_tensor and normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240819963
##
File path: src/operator/image/image_random-inl.h
##
@@ -123,
sandeep-krishnamurthy commented on a change in pull request #13614: Make
to_tensor and normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240819407
##
File path: src/operator/image/image_random-inl.h
##
@@ -62,6
marcoabreu commented on issue #13598: More fine-grained operator implementation
dispatch & memory planning flow
URL:
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446384192
Just had a small chat with Haibin. So just to clarify, my idea would be
rather long-term to a
This is an automated email from the ASF dual-hosted git repository.
cmeier pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 449e17d #13385 [Clojure] - Turn examp
gigasquid commented on issue #13554: #13385 [Clojure] - Turn examples into
integration tests
URL: https://github.com/apache/incubator-mxnet/pull/13554#issuecomment-446383710
Thanks for taking this on and making it happen 💯
I'm going to go ahead and merge this and then we can work on a
gigasquid closed pull request #13554: #13385 [Clojure] - Turn examples into
integration tests
URL: https://github.com/apache/incubator-mxnet/pull/13554
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenanc
zachgk commented on a change in pull request #13364: [MXNET-1225] Always use
config.mk in make install instructions
URL: https://github.com/apache/incubator-mxnet/pull/13364#discussion_r240802315
##
File path: docs/install/build_from_source.md
##
@@ -203,68 +204,82 @@ It i
piyushghai commented on a change in pull request #13046: [MXNET-1155] Add scala
packageTest utility
URL: https://github.com/apache/incubator-mxnet/pull/13046#discussion_r240801610
##
File path: scala-package/packageTest/README.md
##
@@ -0,0 +1,72 @@
+# MXNet Scala Package
zachgk commented on a change in pull request #13046: [MXNET-1155] Add scala
packageTest utility
URL: https://github.com/apache/incubator-mxnet/pull/13046#discussion_r240801313
##
File path: scala-package/packageTest/README.md
##
@@ -0,0 +1,72 @@
+# MXNet Scala Package Test
stu1130 commented on a change in pull request #13614: Make to_tensor and
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240792875
##
File path: src/operator/image/image_random-inl.h
##
@@ -123,28 +159,50 @@
stu1130 commented on a change in pull request #13614: Make to_tensor and
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240795206
##
File path: src/operator/image/image_random-inl.h
##
@@ -123,28 +159,50 @@
stu1130 commented on a change in pull request #13614: Make to_tensor and
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240800204
##
File path: tests/python/unittest/test_gluon_data_vision.py
##
@@ -19,30 +1
stu1130 commented on a change in pull request #13614: Make to_tensor and
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240795082
##
File path: src/operator/image/image_random-inl.h
##
@@ -123,28 +159,50 @@
stu1130 commented on a change in pull request #13614: Make to_tensor and
normalize to accept 3D or 4D tensor inputs
URL: https://github.com/apache/incubator-mxnet/pull/13614#discussion_r240793402
##
File path: src/operator/image/image_random-inl.h
##
@@ -62,6 +67,23 @@ inl
mseth10 edited a comment on issue #12203: flaky test:
test_operator_gpu.test_depthwise_convolution
URL:
https://github.com/apache/incubator-mxnet/issues/12203#issuecomment-446366930
Reproduction steps (from
https://cwiki.apache.org/confluence/display/MXNET/Reproducing+test+results):
anirudh2290 opened a new pull request #13618: Fix warning in waitall doc
URL: https://github.com/apache/incubator-mxnet/pull/13618
## Description ##
Fixing waitall rendering:
https://mxnet.incubator.apache.org/api/python/ndarray/ndarray.html?highlight=waitall#mxnet.ndarray.waitall
mseth10 commented on issue #12203: flaky test:
test_operator_gpu.test_depthwise_convolution
URL:
https://github.com/apache/incubator-mxnet/issues/12203#issuecomment-446366930
Reproduction steps (from
https://cwiki.apache.org/confluence/display/MXNET/Reproducing+test+results):
Spin
marcoabreu edited a comment on issue #13598: More fine-grained operator
implementation dispatch & memory planning flow
URL:
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446353433
Thanks for your very good questions!
For the operator selection I would think a
stu1130 commented on a change in pull request #13611: add image resize operator
and unit test
URL: https://github.com/apache/incubator-mxnet/pull/13611#discussion_r240787276
##
File path: src/operator/image/image_random-inl.h
##
@@ -147,6 +150,140 @@ void Normalize(const n
marcoabreu commented on issue #13598: More fine-grained operator implementation
dispatch & memory planning flow
URL:
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446353433
Thanks for your very good questions!
For the operator selection I would think about a
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new ed2ac32 Bump the publish ti
mseth10 edited a comment on issue #12203: flaky test:
test_operator_gpu.test_depthwise_convolution
URL:
https://github.com/apache/incubator-mxnet/issues/12203#issuecomment-446342867
This flaky test issue has previously been identified
(https://github.com/apache/incubator-mxnet/issues/8712
samskalicky commented on issue #13039: [MXNET-918] Random module
URL: https://github.com/apache/incubator-mxnet/pull/13039#issuecomment-446345931
@ChaiBapchya wrote the randint operator
eric-haibin-lin commented on issue #13598: More fine-grained operator
implementation dispatch & memory planning flow
URL:
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446343940
@marcoabreu thanks for the comments. It is true that the existing
infer_storage interface and
eric-haibin-lin edited a comment on issue #13598: More fine-grained operator
implementation dispatch & memory planning flow
URL:
https://github.com/apache/incubator-mxnet/issues/13598#issuecomment-446343940
@marcoabreu thanks for the comments. True that the existing infer_storage
interfa