apeforest commented on issue #15288: [MXNET-978] Higher order gradient for
sigmoid
URL: https://github.com/apache/incubator-mxnet/pull/15288#issuecomment-503887003
@larroy @sxjscience Please help review this PR. Thanks!
apeforest commented on issue #15288: [MXNET-978] Higher order gradient for
sigmoid
URL: https://github.com/apache/incubator-mxnet/pull/15288#issuecomment-503886931
@kshitij12345 I have figured out how backward works when one of the inputs
is an output of the forward node. Please review
apeforest opened a new pull request #15288: [MXNET-978] Higher order gradient
for sigmoid
URL: https://github.com/apache/incubator-mxnet/pull/15288
## Description ##
This PR adds support of higher order gradient for sigmoid operator.
## Checklist ##
### Essentials ###
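For readers following along, the math behind this PR can be sketched in plain NumPy (an illustrative sketch of the closed forms, not the PR's C++ implementation): with s = sigmoid(x), the first-order gradient is s*(1-s) and the second-order gradient is s*(1-s)*(1-2s).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # first-order gradient: s * (1 - s)
    s = sigmoid(x)
    return s * (1.0 - s)

def sigmoid_grad2(x):
    # second-order gradient: s * (1 - s) * (1 - 2s)
    s = sigmoid(x)
    return s * (1.0 - s) * (1.0 - 2.0 * s)

# numerical check of the second derivative via central differences
x, eps = 0.3, 1e-5
num = (sigmoid_grad(x + eps) - sigmoid_grad(x - eps)) / (2 * eps)
assert abs(num - sigmoid_grad2(x)) < 1e-8
```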
hzfan commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295652707
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
hzfan commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295652673
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
hzfan commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295652643
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
hzfan commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295652620
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
hzfan commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295652590
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
hzfan commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295652568
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
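For context on the semantics the operator targets: `np.trace` sums along the diagonal defined by two chosen axes, returning an array of the remaining dimensions. A quick NumPy illustration (reference behavior only, not the PR's kernel):

```python
import numpy as np

# 2-D case: sum of the main diagonal
assert np.trace(np.array([[1, 2], [3, 4]])) == 5  # 1 + 4

# N-D case: trace over axis1/axis2, keeping the leading axis
a = np.arange(8).reshape(2, 2, 2)
assert np.trace(a, axis1=1, axis2=2).tolist() == [3, 11]  # 0+3, 4+7
```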
frischzenger opened a new issue #15287: pretrained model
URL: https://github.com/apache/incubator-mxnet/issues/15287
the pretrained model link no longer exists:
data.dmlc.ml/mxnet/models
Is there a valid link?
pengzhao-intel commented on issue #15233: [doc] FusedRNNCell support cpu
URL:
https://github.com/apache/incubator-mxnet/issues/15233#issuecomment-503881689
@pengxin99 would you mind filing a PR to improve the doc?
ChaiBapchya commented on issue #14354: [WIP] Ndarray cumsum
URL: https://github.com/apache/incubator-mxnet/pull/14354#issuecomment-503879401
Apologies for delay. Out on vacation. Will resume from July 4.
Author: zhasheng
Date: Thu Jun 20 04:30:22 2019
New Revision: 34586
Log:
update mxnet-1.5.0.rc1
Added:
dev/incubator/mxnet/1.5.0.rc1/
dev/incubator/mxnet/1.5.0.rc1/apache-mxnet-src-1.5.0.rc1-incubating.tar.gz
(with props)
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a change to annotated tag 1.5.0.rc1
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
*** WARNING: tag 1.5.0.rc1 was modified! ***
from e83e110 (tag)
to 820a6de (tag)
tagging
zhasheng pushed a change to branch v1.5.x
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from ccbbf6b Fix java install docs (#15250)
add 6f60b9b fix span issue on tutorial index (#15279)
szha merged pull request #15278: fixing var-seq-len rnn backward() operator
URL: https://github.com/apache/incubator-mxnet/pull/15278
This is an automated message from the Apache Git Service.
To respond to the message,
zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 4d96671 fixing var-seq-len rnn
szha commented on issue #14619: [Discussion] 1.5.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/14619#issuecomment-503829786
@vafl duplicate name issue should have been fixed already.
mikemwx commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r295611772
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -580,18 +580,37 @@ void ArgSort(const nnvm::NodeAttrs&
mikemwx commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r295611691
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -580,18 +580,37 @@ void ArgSort(const nnvm::NodeAttrs&
mikemwx commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r295611332
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -580,18 +580,37 @@ void ArgSort(const nnvm::NodeAttrs&
mikemwx commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r295611556
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -580,18 +580,37 @@ void ArgSort(const nnvm::NodeAttrs&
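As a reminder of the behavior this PR aims to match: `np.argsort` returns the indices that would sort an array, optionally along a given axis. A small NumPy illustration (reference semantics, not the code under review):

```python
import numpy as np

a = np.array([3, 1, 2])
idx = np.argsort(a)                  # indices that would sort a
assert idx.tolist() == [1, 2, 0]
assert a[idx].tolist() == [1, 2, 3]  # applying them sorts the array

# along an axis of a 2-D array
m = np.array([[0, 3], [2, 1]])
assert np.argsort(m, axis=1).tolist() == [[0, 1], [1, 0]]
```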
stu1130 opened a new pull request #15286: numpy compatible amin
URL: https://github.com/apache/incubator-mxnet/pull/15286
## Description ##
numpy compatible amin
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [x] Changes
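For reference, the NumPy behavior being matched: `np.amin` reduces over all elements by default, or along a given axis. An illustrative sketch (reference semantics only):

```python
import numpy as np

a = np.array([[1, 5], [2, 0]])
assert np.amin(a) == 0                       # global minimum
assert np.amin(a, axis=0).tolist() == [1, 0] # column-wise minima
assert np.amin(a, axis=1).tolist() == [1, 0] # row-wise minima
```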
haojin2 commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295602961
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation
haojin2 commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295601171
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation
haojin2 commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295601109
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation
haojin2 commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295600995
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation
anirudh2290 commented on a change in pull request #15118: Conversion from FP32
model to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r295600932
##
File path: src/nnvm/low_precision_pass.cc
##
@@ -0,0 +1,265 @@
+/*
+ * Licensed
haojin2 commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295600804
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation
haojin2 commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295600915
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation
haojin2 commented on a change in pull request #15258: Numpy Trace
URL: https://github.com/apache/incubator-mxnet/pull/15258#discussion_r295600686
##
File path: src/operator/numpy/np_trace_op-inl.h
##
@@ -0,0 +1,255 @@
+/*
+ * Licensed to the Apache Software Foundation
haojin2 commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r295600470
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -580,18 +580,37 @@ void ArgSort(const nnvm::NodeAttrs&
haojin2 commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r295599839
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -580,18 +580,37 @@ void ArgSort(const nnvm::NodeAttrs&
haojin2 commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r295599813
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -580,18 +580,37 @@ void ArgSort(const nnvm::NodeAttrs&
haojin2 commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r295599764
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -580,18 +580,37 @@ void ArgSort(const nnvm::NodeAttrs&
xianyujie commented on issue #15275: How to run mxnet(C++) in single-thread
mode?
URL:
https://github.com/apache/incubator-mxnet/issues/15275#issuecomment-503820927
@ElaineBao yes, it works, but sometimes the number of threads increases; I'm
retesting whether it was an error caused by my
anirudh2290 commented on a change in pull request #15285: [WIP] Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#discussion_r295597038
##
File path: src/common/directed_graph.h
##
@@ -0,0 +1,201 @@
+ /*
Review comment:
Is this PR trying to
Zha0q1 opened a new pull request #15210: Custom Operator Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210
## Description ##
fix: https://github.com/apache/incubator-mxnet/issues/15241
I have implemented the new feature.
Need to add test cases.
Zha0q1 closed pull request #15210: Custom Operator Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210
ElaineBao commented on issue #15275: How to run mxnet(C++) in single-thread
mode?
URL:
https://github.com/apache/incubator-mxnet/issues/15275#issuecomment-503816142
Hi @xianyujie, I tried the solution mentioned in your email, and confirm
that with:
```
compile mxnet with OPENMP=0
```
frischzenger commented on issue #15238: about train imagenet 1k
URL:
https://github.com/apache/incubator-mxnet/issues/15238#issuecomment-503811000
I found the error was caused by wrong image labels, so I corrected the
labels; now the train accuracy is 78% but the val accuracy is 69%, still lower than
frischzenger commented on issue #15238: about train imagenet 1k
URL:
https://github.com/apache/incubator-mxnet/issues/15238#issuecomment-503810430
@leleamol
python train_imagenet.py --data-train ./train/imagenet_train.rec --data-val
./val/imagenet_val.rec --network resnet --num-layers
jinfei3459 commented on issue #14643: Check failed: inputs[i]->ctx() ==
default_ctx (cpu(0) vs. gpu(0)) CachedOp requires all inputs to live on the
same context. But data is on gpu(0) while conv0_weight is on cpu(0)
URL:
jinfei3459 closed issue #14643: Check failed: inputs[i]->ctx() == default_ctx
(cpu(0) vs. gpu(0)) CachedOp requires all inputs to live on the same context.
But data is on gpu(0) while conv0_weight is on cpu(0)
URL: https://github.com/apache/incubator-mxnet/issues/14643
larroy commented on issue #15285: [WIP] Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285#issuecomment-503809100
https://github.com/apache/incubator-mxnet/issues/15198
larroy commented on issue #14619: [Discussion] 1.5.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/14619#issuecomment-503808389
I guess it depends on the GAN, as you could have any layer; if you want to
use a GAN with convs you need higher order gradients for conv...
larroy commented on a change in pull request #15118: Conversion from FP32 model
to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r295586564
##
File path: tests/python/gpu/test_contrib_amp.py
##
@@ -0,0 +1,255 @@
+# Licensed to
larroy commented on a change in pull request #15118: Conversion from FP32 model
to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r295586465
##
File path: src/nnvm/low_precision_pass.cc
##
@@ -0,0 +1,265 @@
+/*
+ * Licensed to
larroy commented on a change in pull request #15118: Conversion from FP32 model
to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r295586321
##
File path: src/nnvm/low_precision_pass.cc
##
@@ -0,0 +1,265 @@
+/*
+ * Licensed to
anirudh2290 commented on issue #15118: Conversion from FP32 model to Mixed
Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#issuecomment-503807605
@ptrendx @pengzhao-intel @samskalicky @larroy @ZhennanQin Thank you for your
review! I have addressed your comments.
larroy opened a new pull request #15285: [WIP] Graph dumper
URL: https://github.com/apache/incubator-mxnet/pull/15285
## Description ##
Utility to dump the computational graph for human consumption.
Dumps the graph to a dot file that can be rendered.
## Checklist ##
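As a sketch of the idea (a hypothetical helper for illustration, not the PR's actual code), a minimal adjacency-list-to-DOT dumper could look like:

```python
def to_dot(graph, name="net"):
    """Render an adjacency-list graph as DOT text that tools such as
    Graphviz can render. `graph` maps each node name to its successors."""
    lines = ["digraph %s {" % name]
    for node, succs in graph.items():
        for succ in succs:
            lines.append('  "%s" -> "%s";' % (node, succ))
    lines.append("}")
    return "\n".join(lines)

# hypothetical three-node computational graph
dot = to_dot({"data": ["conv0"], "conv0_weight": ["conv0"], "conv0": []})
assert dot.startswith("digraph net {")
assert '"data" -> "conv0";' in dot
```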
stephenrawls commented on issue #15278: fixing var-seq-len rnn backward()
operator
URL: https://github.com/apache/incubator-mxnet/pull/15278#issuecomment-503806754
Looks like the test failed due to a tolerance issue. (I tried with the same exact
random seed on my compute instance and the test passed
anirudh2290 commented on a change in pull request #15118: Conversion from FP32
model to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r295585017
##
File path: src/nnvm/low_precision_pass.cc
##
@@ -0,0 +1,265 @@
+/*
+ * Licensed
pengzhao-intel commented on issue #15164: [C++] Improve inference script to
support benchmark on Imagenet
URL: https://github.com/apache/incubator-mxnet/pull/15164#issuecomment-503806307
@anirudh2290 @ZhennanQin @ciyongch please review again if all concerns are
resolved :)
anirudh2290 commented on a change in pull request #15118: Conversion from FP32
model to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r295584534
##
File path: src/c_api/c_api_symbolic.cc
##
@@ -810,6 +810,156 @@ int
anirudh2290 commented on a change in pull request #15118: Conversion from FP32
model to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r295584306
##
File path: python/mxnet/module/executor_group.py
##
@@ -651,6 +652,20 @@ def
anirudh2290 commented on a change in pull request #15118: Conversion from FP32
model to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r295584177
##
File path: src/c_api/c_api_symbolic.cc
##
@@ -810,6 +810,156 @@ int
pengzhao-intel commented on issue #15230: Updating SymbolBlock.imports to
support different dtypes
URL: https://github.com/apache/incubator-mxnet/pull/15230#issuecomment-503804873
LGTM
Sorry for the late reply.
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new d70a54b Bump the publish
zhasheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from eb48370 Added transform tutorial (#15114)
add 145f82d Updating SymbolBlock.imports to support
pengzhao-intel commented on issue #15281: Gluon Inference failed
URL:
https://github.com/apache/incubator-mxnet/issues/15281#issuecomment-503802314
@ZhennanQin will look into the issue. Thanks for letting us know :)
szha merged pull request #15230: Updating SymbolBlock.imports to support
different dtypes
URL: https://github.com/apache/incubator-mxnet/pull/15230
access2rohit commented on a change in pull request #15170: [WIP] [MXNET-1413]
Adding Large Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r295579368
##
File path: tests/python/unittest/test_ndarray.py
##
@@ -819,7
access2rohit commented on a change in pull request #15170: [WIP] [MXNET-1413]
Adding Large Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r295579398
##
File path: tests/python/unittest/test_ndarray.py
##
@@ -860,7
anirudh2290 opened a new issue #15284: Count Sketch Backward, CUDA Memcheck
failures
URL: https://github.com/apache/incubator-mxnet/issues/15284
## Description
There are CUDA memcheck failures for count sketch backward that need to be
addressed. There is an invalid read of size 4 bytes
mxnet-label-bot commented on issue #15284: Count Sketch Backward, CUDA Memcheck
failures
URL:
https://github.com/apache/incubator-mxnet/issues/15284#issuecomment-503797542
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so
anandj91 commented on issue #15124: [MXNET-1294] Priority-based parameter
propagation for improved data parallel training throughput
URL: https://github.com/apache/incubator-mxnet/pull/15124#issuecomment-503795371
Modified the code to address the review comments. Sorry for the delay.
I
stephenrawls commented on issue #15278: fixing var-seq-len rnn backward()
operator
URL: https://github.com/apache/incubator-mxnet/pull/15278#issuecomment-503788708
@roywei @szha
Okay I think the PR is good now.
The problem was indeed what I speculated before: the cudnn
mxnet-label-bot commented on issue #15283: mxnet.ndarray.contrib.boolean_mask
running on gpu arrays randomly throws a CUDA illegal memory access error
URL:
https://github.com/apache/incubator-mxnet/issues/15283#issuecomment-503784598
Hey, this is the MXNet Label Bot.
Thank you for
kalpitdixit opened a new issue #15283: mxnet.ndarray.contrib.boolean_mask
running on gpu arrays randomly throws a CUDA illegal memory access error
URL: https://github.com/apache/incubator-mxnet/issues/15283
## Description
mxnet.ndarray.contrib.boolean_mask running on gpu arrays
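For reference, the intended semantics of the operator can be illustrated with NumPy boolean indexing (a CPU-side sketch of the behavior; the failure reported here is GPU-specific):

```python
import numpy as np

data = np.array([[1, 2], [3, 4], [5, 6]])
mask = np.array([True, False, True])
# boolean_mask keeps the rows of `data` where `mask` is set;
# NumPy fancy indexing expresses the same semantics
assert data[mask].tolist() == [[1, 2], [5, 6]]
```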
ptrendx commented on a change in pull request #15167: [WIP] Pointwise fusion
for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r295559143
##
File path: src/operator/fusion/fused_op.cu
##
@@ -0,0 +1,396 @@
+/*
+ * Licensed to the Apache Software
stephenrawls edited a comment on issue #15278: fixing var-seq-len rnn
backward() operator
URL: https://github.com/apache/incubator-mxnet/pull/15278#issuecomment-503774511
Just to keep the ticket updated:
I have confirmed the following facts:
1. If I set each sequence_length
stephenrawls commented on issue #15278: fixing var-seq-len rnn backward()
operator
URL: https://github.com/apache/incubator-mxnet/pull/15278#issuecomment-503774511
Just to keep the ticket updated:
I have confirmed the following facts:
1. If I set each sequence_length entry to
ptrendx commented on a change in pull request #15167: [WIP] Pointwise fusion
for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r295553783
##
File path: src/operator/fusion/fused_op.cu
##
@@ -0,0 +1,396 @@
+/*
+ * Licensed to the Apache Software
ptrendx commented on a change in pull request #15167: [WIP] Pointwise fusion
for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r295553056
##
File path: src/operator/fusion/fused_op.cu
##
@@ -0,0 +1,396 @@
+/*
+ * Licensed to the Apache Software
ptrendx commented on a change in pull request #15167: [WIP] Pointwise fusion
for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r295552815
##
File path: src/operator/fusion/fused_op.cu
##
@@ -0,0 +1,396 @@
+/*
+ * Licensed to the Apache Software
ptrendx commented on a change in pull request #15167: [WIP] Pointwise fusion
for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r295552867
##
File path: src/operator/fusion/fused_op.cu
##
@@ -0,0 +1,396 @@
+/*
+ * Licensed to the Apache Software
ThomasDelteil merged pull request #15114: Added transform tutorial
URL: https://github.com/apache/incubator-mxnet/pull/15114
thomasdelteil pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 6f60b9b fix span issue on tutorial index (#15279)
add eb48370 Added transform tutorial
ThomasDelteil commented on issue #15230: Updating SymbolBlock.imports to
support different dtypes
URL: https://github.com/apache/incubator-mxnet/pull/15230#issuecomment-503771160
@pengzhao-intel is that good for merging?
ThomasDelteil commented on issue #15279: Span issue on tutorial index
URL: https://github.com/apache/incubator-mxnet/pull/15279#issuecomment-503770839
verified on the docs that it fixes the bug :+1:
thomasdelteil pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 6f60b9b fix span issue on
ThomasDelteil merged pull request #15279: Span issue on tutorial index
URL: https://github.com/apache/incubator-mxnet/pull/15279
stu1130 opened a new pull request #15282: Numpy compatible eye
URL: https://github.com/apache/incubator-mxnet/pull/15282
## Description ##
Numpy compatible eye
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [x] Changes are
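For reference, the NumPy behavior being matched: `np.eye(N, M, k)` puts ones on the k-th diagonal of an N x M matrix. An illustrative sketch (reference semantics only):

```python
import numpy as np

# square identity matrix
assert np.eye(3).tolist() == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# rectangular, with the diagonal shifted one column to the right
assert np.eye(2, 3, k=1).tolist() == [[0, 1, 0], [0, 0, 1]]
```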
lanking520 closed issue #14260: c/c++ multiple threads inference problem
URL: https://github.com/apache/incubator-mxnet/issues/14260
Zha0q1 commented on a change in pull request #15210: Custom Operator Profiling
Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295547034
##
File path: src/profiler/profiler.h
##
@@ -1149,8 +1158,15 @@ struct ProfileOperator : public
stu1130 closed pull request #15276: Numpy compatible eye
URL: https://github.com/apache/incubator-mxnet/pull/15276
Zha0q1 commented on a change in pull request #15210: Custom Operator Profiling
Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295543127
##
File path: src/profiler/custom_op_profiler.h
##
@@ -0,0 +1,115 @@
+/*
+* Licensed to the Apache
drivanov commented on a change in pull request #14443: Mxnet allclose
URL: https://github.com/apache/incubator-mxnet/pull/14443#discussion_r295539990
##
File path: tests/python/gpu/test_gluon_gpu.py
##
@@ -454,10 +522,7 @@ def get_net(num_ops):
drivanov commented on a change in pull request #14443: Mxnet allclose
URL: https://github.com/apache/incubator-mxnet/pull/14443#discussion_r295539990
##
File path: tests/python/gpu/test_gluon_gpu.py
##
@@ -454,10 +522,7 @@ def get_net(num_ops):
apeforest commented on a change in pull request #15210: Custom Operator
Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295537564
##
File path: src/profiler/custom_op_profiler.h
##
@@ -0,0 +1,115 @@
+/*
+* Licensed to the Apache
apeforest commented on a change in pull request #15210: Custom Operator
Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295537564
##
File path: src/profiler/custom_op_profiler.h
##
@@ -0,0 +1,115 @@
+/*
+* Licensed to the Apache
apeforest commented on a change in pull request #15210: Custom Operator
Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295537274
##
File path: src/profiler/custom_op_profiler.h
##
@@ -0,0 +1,115 @@
+/*
+* Licensed to the Apache
apeforest commented on a change in pull request #15210: Custom Operator
Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295536264
##
File path: src/profiler/custom_op_profiler.h
##
@@ -0,0 +1,115 @@
+/*
+* Licensed to the Apache
apeforest commented on a change in pull request #15210: Custom Operator
Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295536264
##
File path: src/profiler/custom_op_profiler.h
##
@@ -0,0 +1,115 @@
+/*
+* Licensed to the Apache
piyushghai commented on issue #15254:
mxnet(mxnet-full_2.11-linux-x86_64-gpu-1.5.0-SNAPSHOT) cannot support cuda10.1?
URL:
https://github.com/apache/incubator-mxnet/issues/15254#issuecomment-503755798
@tomoncle I had a look at your processing code. It seems like the
OpenCV
apeforest commented on a change in pull request #15210: Custom Operator
Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295533238
##
File path: src/profiler/profiler.h
##
@@ -1149,8 +1158,15 @@ struct ProfileOperator : public
apeforest commented on a change in pull request #15210: Custom Operator
Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295532838
##
File path: src/profiler/profiler.h
##
@@ -1149,8 +1158,15 @@ struct ProfileOperator : public
apeforest commented on a change in pull request #15210: Custom Operator
Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295531937
##
File path: src/engine/threaded_engine.cc
##
@@ -333,9 +333,14 @@ void
apeforest commented on a change in pull request #15210: Custom Operator
Profiling Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r295531841
##
File path: src/engine/naive_engine.cc
##
@@ -159,16 +160,21 @@ class NaiveEngine final : public