rongzha1 opened a new pull request #16222: [mkldnn-v1.0] Add MKL-DNN
softmax_output
URL: https://github.com/apache/incubator-mxnet/pull/16222
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
Please feel free to remove inapplic
ZhennanQin commented on a change in pull request #16025: Numpy add numpy op
left_shift and right_shift
URL: https://github.com/apache/incubator-mxnet/pull/16025#discussion_r326488169
##
File path: src/operator/numpy/np_elemwise_broadcast_op.cc
##
@@ -144,5 +165,21 @@
MXNE
ElaineBao commented on issue #16199: [mkldnn-v1.0] Add MKL-DNN BN
URL: https://github.com/apache/incubator-mxnet/pull/16199#issuecomment-533419572
LGTM
This is an automated message from the Apache Git Service.
To respond to th
rongzha1 commented on a change in pull request #16199: [mkldnn-v1.0] Add
MKL-DNN BN
URL: https://github.com/apache/incubator-mxnet/pull/16199#discussion_r326479771
##
File path: src/operator/nn/mkldnn/mkldnn_batch_norm-inl.h
##
@@ -241,25 +176,30 @@ void MKLDNNBatchNormFor
rongzha1 commented on a change in pull request #16199: [mkldnn-v1.0] Add
MKL-DNN BN
URL: https://github.com/apache/incubator-mxnet/pull/16199#discussion_r326479835
##
File path: src/operator/nn/mkldnn/mkldnn_batch_norm-inl.h
##
@@ -241,25 +176,30 @@ void MKLDNNBatchNormFor
rongzha1 opened a new pull request #16221: [mkldnn-v1.0] Add MKL-DNN FC
URL: https://github.com/apache/incubator-mxnet/pull/16221
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items fo
rongzha1 commented on issue #16213: [mkldnn-v1.0][Don't merge] Trigger CI after
merging the master branch
URL: https://github.com/apache/incubator-mxnet/pull/16213#issuecomment-533416220
> @rongzha1 please rebase your ACT and BN PR.
rebase done
apeforest commented on issue #16218: Improving performance of argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#issuecomment-533410784
It seems this PR is working on the same operator as
https://github.com/apache/incubator-mxnet/pull/16178. Can you run a profiling
u
apeforest commented on a change in pull request #16218: Improving performance
of argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#discussion_r326474610
##
File path: src/operator/tensor/broadcast_reduce_op.h
##
@@ -556,6 +556,162 @@ inline bool Red
This is an automated email from the ASF dual-hosted git repository.
thomasdelteil pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from a783d81 add epsilon to sum(pvalue) upperbound (#16211)
add 7126438 New Website: New Pipeline [
ThomasDelteil merged pull request #15883: New Website: New Pipeline [3/3]
URL: https://github.com/apache/incubator-mxnet/pull/15883
ThomasDelteil commented on issue #15883: New Website: New Pipeline [3/3]
URL: https://github.com/apache/incubator-mxnet/pull/15883#issuecomment-533409036
CI passing, merging. If any issue ping @ThomasDelteil, @aaronmarkham, @sad-
pengzhao-intel commented on issue #16213: [mkldnn-v1.0][Don't merge] Trigger CI
after merging the master branch
URL: https://github.com/apache/incubator-mxnet/pull/16213#issuecomment-533408390
@rongzha1 please rebase your ACT and BN PR.
pengzhao-intel commented on issue #16213: [mkldnn-v1.0][Don't merge] Trigger CI
after merging the master branch
URL: https://github.com/apache/incubator-mxnet/pull/16213#issuecomment-533407882
@TaoLv CI passed, please go ahead :)
apeforest commented on a change in pull request #16215: New ops for RCNN + old
ops improvements for RCNN
URL: https://github.com/apache/incubator-mxnet/pull/16215#discussion_r326471563
##
File path: src/operator/contrib/bounding_box-inl.h
##
@@ -787,6 +787,284 @@ void Bipa
apeforest commented on a change in pull request #16215: New ops for RCNN + old
ops improvements for RCNN
URL: https://github.com/apache/incubator-mxnet/pull/16215#discussion_r326471319
##
File path: src/operator/contrib/bounding_box-inl.h
##
@@ -787,6 +787,284 @@ void Bipa
access2rohit edited a comment on issue #16178: [WIP]improving argmax perf
URL: https://github.com/apache/incubator-mxnet/pull/16178#issuecomment-533392917
> Ah, I found that the `NaN` handling is provided as another function in
NumPy.
@iblis17 yes, also please take a look at this issu
access2rohit commented on issue #16178: [WIP]improving argmax perf
URL: https://github.com/apache/incubator-mxnet/pull/16178#issuecomment-533392917
> Ah, I found that the `NaN` handling is provided as another function in
NumPy.
@iblis17 yes, also please take a look at this issue :
https:
haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 6247dc8 [Numpy] Differentiable svd (#15795)
add a783d81 add epsilon to sum(pvalue) upperbound (#16211)
haojin2 merged pull request #16211: [Numpy] Fix numerical error in
multinomial's pvalue check
URL: https://github.com/apache/incubator-mxnet/pull/16211
iblis17 commented on issue #16178: [WIP]improving argmax perf
URL: https://github.com/apache/incubator-mxnet/pull/16178#issuecomment-533386758
Ah, I found that the `NaN` handling is provided as another function in NumPy.
https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.n
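The NumPy function referred to above is presumably `numpy.nanargmax` (the linked URL is truncated). A minimal sketch of the behavioral difference, assuming a 1-D float array:

```python
import numpy as np

x = np.array([1.0, np.nan, 3.0])
print(np.argmax(x))     # -> 1: plain argmax treats the NaN as the maximum
print(np.nanargmax(x))  # -> 2: nanargmax ignores NaNs and finds the true max
```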
mxnet-label-bot commented on issue #16220: `NDArray.clip()` works very slow in
imperative execution on GPU.
URL:
https://github.com/apache/incubator-mxnet/issues/16220#issuecomment-533385413
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest s
access2rohit commented on issue #16203: removing 64 bit un-used APIs
MXNDArrayLoadFromBuffer64 and MXNDArrayLoad64
URL: https://github.com/apache/incubator-mxnet/pull/16203#issuecomment-533385465
@apeforest @frankfliu PR is ready for review
access2rohit commented on issue #16203: removing 64 bit un-used APIs
MXNDArrayLoadFromBuffer64 and MXNDArrayLoad64
URL: https://github.com/apache/incubator-mxnet/pull/16203#issuecomment-533385383
@mxnet-label-bot add [pr-awaiting-review]
igolan opened a new issue #16220: `NDArray.clip()` works very slow in
imperative execution on GPU.
URL: https://github.com/apache/incubator-mxnet/issues/16220
## Description
`NDArray.clip()` works very slowly in imperative execution on GPU (~3x slower
than ReLU).
More details below
rongzha1 commented on a change in pull request #16195: [mkldnn-v1.0] Add
MKL-DNN activation
URL: https://github.com/apache/incubator-mxnet/pull/16195#discussion_r326451726
##
File path: src/operator/nn/mkldnn/mkldnn_act-inl.h
##
@@ -67,8 +63,28 @@ MKLDNNActForward &GetActF
zachgk opened a new pull request #16219: Faster Scala NDArray to BufferedImage
function
URL: https://github.com/apache/incubator-mxnet/pull/16219
## Description ##
Fixes #15123
The toImage function now runs ~85x faster on the 1024x576px Pug_cookie.jpg
image.
@Chouffe
## Ch
ciyongch commented on a change in pull request #16195: [mkldnn-v1.0] Add
MKL-DNN activation
URL: https://github.com/apache/incubator-mxnet/pull/16195#discussion_r326442272
##
File path: src/operator/nn/mkldnn/mkldnn_act-inl.h
##
@@ -67,8 +63,28 @@ MKLDNNActForward &GetActF
Jerryzcn commented on issue #16215: New ops for RCNN + old ops improvements for
RCNN
URL: https://github.com/apache/incubator-mxnet/pull/16215#issuecomment-533361303
> can we have unit tests for `box_encode` and `box_decode`?
sure
eric-haibin-lin commented on issue #15613: [Discussion] 1.5.1 Patch Release
URL:
https://github.com/apache/incubator-mxnet/issues/15613#issuecomment-533360136
If we have a chance for another RC, we should port
https://github.com/apache/incubator-mxnet/pull/15240 and
https://github.com/apa
TristonC commented on issue #16201: using gluon/image_classification.py img/sec
speed up when metric update and reset when turned off
URL:
https://github.com/apache/incubator-mxnet/issues/16201#issuecomment-533359934
The speedup is not real, and it is due to the asynchronous computation of
access2rohit commented on issue #16178: [WIP]improving argmax perf
URL: https://github.com/apache/incubator-mxnet/pull/16178#issuecomment-533359314
@drivanov can you make changes to the GPU part only? My optimizations are
particularly for CPU and the speedup is roughly 10x in some specific case
CynthiaProtector commented on issue #15962: ResNet18_v2 under the directory of
/mxnet/example/gluon
URL:
https://github.com/apache/incubator-mxnet/issues/15962#issuecomment-533357349
The other reason is that no data augmentation is applied in the training
process, therefore, the final tes
CynthiaProtector commented on issue #15863: Training ResNet-20 with CIFAR-10
does not converge with MXNet1.3.0
URL:
https://github.com/apache/incubator-mxnet/issues/15863#issuecomment-533356488
Thanks, zachgk. I did not build the ResNet-20 on my own; I used the ResNet-20
under the director
wkcn commented on a change in pull request #16218: Improving performance of
argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#discussion_r326425576
##
File path: src/operator/tensor/broadcast_reduce_op.h
##
@@ -556,6 +556,162 @@ inline bool ReduceAx
wkcn commented on a change in pull request #16218: Improving performance of
argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#discussion_r326428347
##
File path: src/operator/tensor/broadcast_reduce_op_index.cu
##
@@ -43,5 +43,18 @@ NNVM_REGISTER_OP
wkcn commented on a change in pull request #16218: Improving performance of
argmax operator
URL: https://github.com/apache/incubator-mxnet/pull/16218#discussion_r326427720
##
File path: src/operator/tensor/broadcast_reduce_op.h
##
@@ -556,6 +556,162 @@ inline bool ReduceAx
aaronmarkham commented on issue #14329: [Flaky] flaky test in
test_operator_gpu.test_convolution_multiple_streams
URL:
https://github.com/apache/incubator-mxnet/issues/14329#issuecomment-533353291
Failed again!
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validat
wkcn edited a comment on issue #15836: The pre-built MXNet package on Windows
uses the older master branch
URL:
https://github.com/apache/incubator-mxnet/issues/15836#issuecomment-533345816
Close it since the issue has been solved. The reason is that `git pull`
fails in the building syste
wkcn closed issue #15836: The pre-built MXNet package on Windows uses the older
master branch
URL: https://github.com/apache/incubator-mxnet/issues/15836
wkcn commented on issue #15836: The pre-built MXNet package on Windows uses the
older master branch
URL:
https://github.com/apache/incubator-mxnet/issues/15836#issuecomment-533345816
Close it since the issue has been solved.
Thank @yajiedesign for the fix : )
drivanov commented on issue #16178: [WIP]improving argmax perf
URL: https://github.com/apache/incubator-mxnet/pull/16178#issuecomment-533344638
@access2rohit:
- I was working on the same task and I just filed the following PR:
https://github.com/apache/incubator-mxnet/pull/16218.
- Curren
zachgk closed issue #15254:
mxnet(mxnet-full_2.11-linux-x86_64-gpu-1.5.0-SNAPSHOT) cannot support cuda10.1?
URL: https://github.com/apache/incubator-mxnet/issues/15254
zachgk commented on issue #15254:
mxnet(mxnet-full_2.11-linux-x86_64-gpu-1.5.0-SNAPSHOT) cannot support cuda10.1?
URL:
https://github.com/apache/incubator-mxnet/issues/15254#issuecomment-54199
@tomoncle I am closing this issue because it seems like all the problems
were addressed. Ple
zachgk commented on issue #15863: Training ResNet-20 with CIFAR-10 does not
converge with MXNet1.3.0
URL:
https://github.com/apache/incubator-mxnet/issues/15863#issuecomment-50532
How were you building ResNet and running the training? Can you share the
script/code or a minimal example
drivanov opened a new pull request #16218: Improving performance of argmax
operator
URL: https://github.com/apache/incubator-mxnet/pull/16218
## Description ##
On average, this implementation of the `argmax` operator runs 4.2x faster on
CPU and 7.8x faster on GPU than the previous one.
zachgk commented on issue #16002: How to get sliding window output?
URL:
https://github.com/apache/incubator-mxnet/issues/16002#issuecomment-533327101
You would want to use a for loop to construct the indices input to gather_nd.
```
a = mx.nd.array([0,1,2,3,4,5])
indices = mx.n
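The suggestion above (build the `indices` input with a for loop, then gather) can be sketched end-to-end. NumPy fancy indexing stands in for `mx.nd.gather_nd` here so the example is self-contained; the index construction is the same, and the window size of 3 is an assumption for illustration:

```python
import numpy as np

a = np.array([0, 1, 2, 3, 4, 5])
window = 3  # assumed window size for illustration
# Build one row of indices per sliding-window position with a for loop.
idx = np.array([np.arange(i, i + window) for i in range(len(a) - window + 1)])
out = a[idx]  # with MXNet NDArrays, this gather step would use nd.gather_nd
print(out.tolist())  # -> [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]
```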
reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 6af6570 fix flaky test (#16191)
add 6247dc8 [Numpy] Differentiable svd (#15795)
No new revisions
reminisce merged pull request #15795: [Numpy] Differentiable svd
URL: https://github.com/apache/incubator-mxnet/pull/15795
aaronmarkham opened a new issue #16217: CI failure -
test_operator_gpu.test_update_ops_mutation
URL: https://github.com/apache/incubator-mxnet/issues/16217
Failed a run on an unrelated update:
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/d
mxnet-label-bot commented on issue #16217: CI failure -
test_operator_gpu.test_update_ops_mutation
URL:
https://github.com/apache/incubator-mxnet/issues/16217#issuecomment-533322558
access2rohit commented on issue #9686: [Discussion] MXNet 2.0 Roadmap (was:
APIs that might be a good idea to break in 2.0)
URL:
https://github.com/apache/incubator-mxnet/issues/9686#issuecomment-533319840
we need to fix this issue as well
https://github.com/apache/incubator-mxnet/issues/
access2rohit commented on issue #16216: Inconsistent behaviour nd.argmax
against np.argmax when there are 'nans' in data
URL:
https://github.com/apache/incubator-mxnet/issues/16216#issuecomment-533319492
@reminisce @szha
Th
access2rohit commented on issue #16216: Inconsistent behaviour nd.argmax
against np.argmax when there are 'nans' in data
URL:
https://github.com/apache/incubator-mxnet/issues/16216#issuecomment-533319447
@mxnet-label-bot add [numpy]
mxnet-label-bot commented on issue #16216: Inconsistent behaviour nd.argmax
against np.argmax when there are 'nans' in data
URL:
https://github.com/apache/incubator-mxnet/issues/16216#issuecomment-533318560
donhk1011 commented on issue #16002: How to get sliding window output?
URL:
https://github.com/apache/incubator-mxnet/issues/16002#issuecomment-533318506
Hi,
Thanks for the reply. Is it possible to use gather_nd with something like
for-loop?
Could you please give me an example of how
access2rohit opened a new issue #16216: Inconsistent behaviour nd.argmax
against np.argmax when there are 'nans' in data
URL: https://github.com/apache/incubator-mxnet/issues/16216
>>> import numpy as np
>>> import mxnet as mx
>>> import mxnet.ndarray as nd
>>> x = np.array([[1,5,3],[float('nan'), 2,6]])
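For reference, the NumPy side of the comparison can be reproduced directly: NumPy's `argmax` treats NaN as the maximum, so it returns the NaN's position.

```python
import numpy as np

x = np.array([[1, 5, 3], [float('nan'), 2, 6]])
# Row 0: max of [1, 5, 3] is at index 1; row 1: the NaN at index 0 "wins".
print(np.argmax(x, axis=1).tolist())  # -> [1, 0]
```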
zachgk commented on issue #15962: ResNet18_v2 under the directory of
/mxnet/example/gluon
URL:
https://github.com/apache/incubator-mxnet/issues/15962#issuecomment-533315371
Your training accuracy is going to 100 so your model is probably overfitting
the dataset
(https://www.d2l.ai/chapte
zachgk edited a comment on issue #15962: ResNet18_v2 under the directory of
/mxnet/example/gluon
URL:
https://github.com/apache/incubator-mxnet/issues/15962#issuecomment-533315371
Your training accuracy is going to 1.0 so your model is probably overfitting
the dataset
(https://www.d2l.ai
zachgk commented on issue #16002: How to get sliding window output?
URL:
https://github.com/apache/incubator-mxnet/issues/16002#issuecomment-533311290
I don't think so. You could try using gather_nd as a workaround.
Jerryzcn opened a new pull request #16215: New ops for RCNN + old ops
improvements for RCNN
URL: https://github.com/apache/incubator-mxnet/pull/16215
## Description ##
1. Box Encoder for RCNN
2. Box Decoder for RCNN
3. amp_multicast can cast to narrowest type now
4. roi_align i
zachgk commented on issue #16010: DropConnect Layer
URL:
https://github.com/apache/incubator-mxnet/issues/16010#issuecomment-533298429
The TensorFlow workaround you gave is to call dropout on the weight layer
and then multiply by the probability for the dropout layer to undo the weight
ch
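The weight-masking idea under discussion can be sketched in NumPy. This is a hypothetical illustration of DropConnect-style masking with inverted-dropout rescaling, not MXNet's or TensorFlow's API; the drop probability and weight shape are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                  # assumed drop probability for illustration
W = np.ones((4, 3))      # a toy weight matrix
# DropConnect zeroes individual weights (not activations, as dropout does).
mask = rng.random(W.shape) >= p
W_drop = W * mask / (1.0 - p)  # rescale so the expected value matches W
```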
apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from f5d8fbf New Website: Remove Old Content [2/3] (#15885)
add 6af6570 fix flaky test (#16191)
No new
apeforest merged pull request #16191: Fix flaky test in test_profiler
URL: https://github.com/apache/incubator-mxnet/pull/16191
reminisce commented on a change in pull request #15902: Numpy add numpy op roll
URL: https://github.com/apache/incubator-mxnet/pull/15902#discussion_r326359220
##
File path: src/operator/numpy/np_matrix_op.cc
##
@@ -345,5 +346,73 @@ Examples::
.add_argument("data", "NDArra
szha commented on issue #16207: Bump numpy version >=1.17.0
URL: https://github.com/apache/incubator-mxnet/pull/16207#issuecomment-533276349
@leezu FYI
perdasilva commented on issue #16202: [CD] Add COMMIT_ID param to release job
URL: https://github.com/apache/incubator-mxnet/pull/16202#issuecomment-533264342
@aaronmarkham also - no problem with you blocking it. I know you want to get
this website released hehehe
perdasilva commented on issue #16202: [CD] Add COMMIT_ID param to release job
URL: https://github.com/apache/incubator-mxnet/pull/16202#issuecomment-533264119
@aaronmarkham
**Can you explain removing the git_sha param?**
That was left-over from a previous CD PR. We originally had the
ptrendx commented on a change in pull request #16039: FullyConnected Bias
performance improvement on GPU
URL: https://github.com/apache/incubator-mxnet/pull/16039#discussion_r326333603
##
File path: src/operator/nn/fully_connected-inl.h
##
@@ -169,19 +355,7 @@ void FCBackw
ptrendx commented on a change in pull request #16039: FullyConnected Bias
performance improvement on GPU
URL: https://github.com/apache/incubator-mxnet/pull/16039#discussion_r326332767
##
File path: src/operator/nn/fully_connected-inl.h
##
@@ -169,19 +355,7 @@ void FCBackw
anirudh2290 opened a new issue #16214: test_sync_batchnorm failure on p3.8xlarge
URL: https://github.com/apache/incubator-mxnet/issues/16214
test_sync_batchnorm behaves differently when there are different numbers of
GPU devices on the machine. It fails on p3.8xlarge but when num_devices are
mxnet-label-bot commented on issue #16214: test_sync_batchnorm failure on
p3.8xlarge
URL:
https://github.com/apache/incubator-mxnet/issues/16214#issuecomment-533260677
reminisce commented on issue #16209: complex data type support and numpy
operator fft
URL: https://github.com/apache/incubator-mxnet/pull/16209#issuecomment-533257194
Cc @sxjscience who implemented the first FFT version in MXNet.
aaronmarkham pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from a37a76c Float64 fallback for mkldnn subgraph and rnn op (#15853)
add f5d8fbf New Website: Remov
aaronmarkham merged pull request #15885: New Website: Remove Old Content [2/3]
URL: https://github.com/apache/incubator-mxnet/pull/15885
sxjscience commented on issue #15331: [fix] missing input log higher order.
URL: https://github.com/apache/incubator-mxnet/pull/15331#issuecomment-533247431
I guess we need to add a test case.
zachgk commented on issue #14479: Unable to load mxnet-cu100 on Windows 10
after install
URL:
https://github.com/apache/incubator-mxnet/issues/14479#issuecomment-533246890
You need the CUDA version to match exactly. So, 10.0 for mxnet-cu100
or 10.1 for mxnet-cu101.
sxjscience commented on a change in pull request #16198: [fix] Update
`test_update_ops_mutation` tolerance
URL: https://github.com/apache/incubator-mxnet/pull/16198#discussion_r326306336
##
File path: tests/python/unittest/test_ndarray.py
##
@@ -1887,14 +1888,16 @@ def che
kshitij12345 commented on issue #15331: [fix] missing input log higher order.
URL: https://github.com/apache/incubator-mxnet/pull/15331#issuecomment-533221968
@apeforest @larroy Gentle Ping.
kshitij12345 commented on a change in pull request #16198: [fix] Update
`test_update_ops_mutation` tolerance
URL: https://github.com/apache/incubator-mxnet/pull/16198#discussion_r326269261
##
File path: tests/python/unittest/test_ndarray.py
##
@@ -1887,14 +1888,16 @@ def c
TaoLv opened a new pull request #16213: [mkldnn-v1.0][Don't merge] Trigger CI
after merging the master branch
URL: https://github.com/apache/incubator-mxnet/pull/16213
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
Please fe
taolv pushed a change to branch mkldnn-v1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 1ff9429 [mkldnn-v1.0] Add MKL-DNN Convolution (#16141)
add e87995d Reducing memory footprint of o
taolv pushed a commit to branch mkldnn-v1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
commit 99145a5014f62375a36a23da26efd77527385136
Merge: 1ff9429 a37a76c
Author: Tao Lv
AuthorDate: Thu Sep 19 23:18
hzfan commented on a change in pull request #16100: Infra for tvm op runtime
dispatch
URL: https://github.com/apache/incubator-mxnet/pull/16100#discussion_r326247201
##
File path: contrib/tvmop/compile.py
##
@@ -37,23 +40,57 @@ def get_target(device):
parser = argpars
QueensGambit edited a comment on issue #15632: Building MxNet with CPP_PACKAGE
on Windows10 (2019-07-23)
URL:
https://github.com/apache/incubator-mxnet/issues/15632#issuecomment-533182265
Hello @JiaoPaner.
Yes, the reason for the higher memory footprint is that the
`-DCMAKE_BUILD_TY
QueensGambit edited a comment on issue #15632: Building MxNet with CPP_PACKAGE
on Windows10 (2019-07-23)
URL:
https://github.com/apache/incubator-mxnet/issues/15632#issuecomment-533182265
Hello @JiaoPaner.
Yes, the reason is that the `-DCMAKE_BUILD_TYPE=Release` argument is
missing
QueensGambit commented on issue #15632: Building MxNet with CPP_PACKAGE on
Windows10 (2019-07-23)
URL:
https://github.com/apache/incubator-mxnet/issues/15632#issuecomment-533182265
Hello @JiaoPaner.
Yes, the reason is that the `-DCMAKE_BUILD_TYPE=Release` argument is
missing in the
JiaoPaner commented on issue #14116: Failure in generated op.h in version 1.3.1
URL:
https://github.com/apache/incubator-mxnet/issues/14116#issuecomment-533179076
Facing the same error when compiling 1.5.0.
Has it been fixed yet?
JiaoPaner edited a comment on issue #15632: Building MxNet with CPP_PACKAGE on
Windows10 (2019-07-23)
URL:
https://github.com/apache/incubator-mxnet/issues/15632#issuecomment-533146349
facing the same error. Is there any other solution?
JiaoPaner commented on issue #15632: Building MxNet with CPP_PACKAGE on
Windows10 (2019-07-23)
URL:
https://github.com/apache/incubator-mxnet/issues/15632#issuecomment-533146349
I face the same error. Is there any other solution?
perdasilva commented on issue #16202: [CD] Add COMMIT_ID param to release job
URL: https://github.com/apache/incubator-mxnet/pull/16202#issuecomment-533145539
@zachgk if this is all good, please merge - the pipeline works. The failures
are either due to flakiness or a persistent problem tha
anirudh2290 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new d54069d Bump the publish
SayHiRay opened a new pull request #16212: Fix inconsistent interpolation
method values
URL: https://github.com/apache/incubator-mxnet/pull/16212
In cv2, bicubic interpolation should be represented by 2, and area-based
interpolation should be 3. This is correct in the docstring of `imresiz
iblis17 commented on issue #16178: [WIP]improving argmax perf
URL: https://github.com/apache/incubator-mxnet/pull/16178#issuecomment-533119983
The failure case is this:
```julia
julia> x
fantajeon edited a comment on issue #14774: Port binding failed in distributed
training example
URL:
https://github.com/apache/incubator-mxnet/issues/14774#issuecomment-533109093
How about this: DMLC_USE_KUBERNETES=1.
When the ps library binds to an address and port, it would set the address
fantajeon commented on issue #14774: Port binding failed in distributed
training example
URL:
https://github.com/apache/incubator-mxnet/issues/14774#issuecomment-533109093
How about this: DMLC_USE_KUBERNETES=1.
When the ps library binds to an address and port, it would set the address =
0.0
pengzhao-intel closed issue #16177: How to dump quantized weights from MKLDNN
as Ndarray
URL: https://github.com/apache/incubator-mxnet/issues/16177
pengzhao-intel commented on issue #16177: How to dump quantized weights from
MKLDNN as Ndarray
URL:
https://github.com/apache/incubator-mxnet/issues/16177#issuecomment-533101017
closing since the question is answered :)
Feel free to reopen or answer more
hgt312 commented on a change in pull request #16009: [Numpy] Numpy compatible
bitwise_and operator
URL: https://github.com/apache/incubator-mxnet/pull/16009#discussion_r326084892
##
File path: src/operator/elemwise_op_common.h
##
@@ -186,6 +186,25 @@ inline bool ElemwiseTy