xcwanAndy opened a new issue #14280: Compiling from source code error
URL: https://github.com/apache/incubator-mxnet/issues/14280
## Description
Compiling from source produced a ***link libzmq*** error.
## Environment info (Required)
```
--Python Info--
```
ZhennanQin commented on issue #14275: Register fake grad to subgraph and
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468169906
@xinyu-intel Please fold
https://github.com/apache/incubator-mxnet/pull/14276 into this PR, as @TaoLv
suggests.
mxnet-label-bot commented on issue #14280: Compiling from source code error
URL:
https://github.com/apache/incubator-mxnet/issues/14280#issuecomment-468169831
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so
that the
ZhennanQin commented on issue #14275: Register fake grad to subgraph and
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468169582
Merging in the correct order has the same benefit :)
TaoLv commented on issue #14275: Register fake grad to subgraph and quantized
operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468168853
Avoiding side effects and keeping the master branch healthy is the benefit.
TaoLv commented on issue #14253: [RFC] Introducing NumPy-compatible coding
experience into MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468168321
Not sure I understand the `checkpointing`. Can you explain a bit more? I
think we already have a memory planning pass
wkcn commented on issue #14268: Add numpy module under root module mxnet
URL:
https://github.com/apache/incubator-mxnet/issues/14268#issuecomment-468168003
@reminisce
A Gluon Block accepts an instance of either mx.nd.NDArray or mx.sym.Symbol.
We can distinguish between symbolic and
diandianliu opened a new issue #14279: set mxnet single thread
URL: https://github.com/apache/incubator-mxnet/issues/14279
Hi,
I run my program on Linux, using the CPU and calling mxnet.so. I find that my
program has many threads, even though it does not fork or call any other
function that might be multithreaded. It
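For questions like this, the usual knobs are environment variables set before the process starts; a hedged sketch (the exact effect depends on how MXNet was built, e.g. with OpenMP or MKL-DNN):

```shell
# Commonly used knobs for forcing MXNet toward single-threaded CPU execution.
# These must be exported before the process (and thus libmxnet.so) starts.
export OMP_NUM_THREADS=1              # OpenMP threads inside operator kernels
export MXNET_CPU_WORKER_NTHREADS=1    # engine CPU worker threads
export MXNET_ENGINE_TYPE=NaiveEngine  # single-threaded engine, mainly for debugging
```

NaiveEngine disables asynchronous execution entirely, so it trades throughput for determinism.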
ZhennanQin commented on issue #14275: Register fake grad to subgraph and
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468164488
@TaoLv Yes, there's a side effect if this is merged before
https://github.com/apache/incubator-mxnet/pull/14276. So
TaoLv commented on issue #14275: Register fake grad to subgraph and quantized
operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468162743
@ZhennanQin I'm afraid this PR has a side effect if it's merged before #14276.
Reverting should not be a big deal as it
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 03c1f37 Bump the publish
xinyu-intel commented on issue #14274: added mkldnn dependency for plugin
compile target
URL: https://github.com/apache/incubator-mxnet/pull/14274#issuecomment-468161272
@TaoLv It builds successfully in my local Ubuntu environment.
ZhennanQin commented on issue #14275: Register fake grad to subgraph and
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468160543
@TaoLv I suggest not merging them together, because this PR is a workaround
while
szha commented on issue #14262: Fix NaN value comparisons in relu, max and min
ops
URL: https://github.com/apache/incubator-mxnet/pull/14262#issuecomment-468159110
@anirudhacharya Thanks for the explanation. Should relu grad deal with NaN
in a special way?
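Background for the NaN question: IEEE-754 comparisons involving NaN are always false, so max/min kernels written as plain comparisons silently drop NaN. A small NumPy illustration (not the MXNet ops in question):

```python
import numpy as np

# Every ordered comparison with NaN is false, so an `a if a > b else b`
# style max kernel returns the other operand instead of propagating NaN.
print(np.nan > 0.0)             # False
print(np.maximum(np.nan, 1.0))  # nan: np.maximum propagates NaN
print(np.fmax(np.nan, 1.0))     # 1.0: np.fmax ignores NaN instead
```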
TaoLv commented on issue #14275: Register fake grad to subgraph and quantized
operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468158121
Is it possible to merge this PR into #14276 ? @xinyu-intel @ZhennanQin
@pengzhao-intel
junrushao1994 commented on issue #14253: [RFC] Introducing NumPy-compatible
coding experience into MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468156743
@TaoLv In neural nets, once you do backprop, you cannot overwrite data
because it destroys
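The backprop point can be made concrete with a hand-rolled sketch (hypothetical, not MXNet internals): for y = x**2 the backward pass needs the forward input x, and engines typically keep a reference rather than a copy to save memory, so overwriting x in place destroys the value the gradient depends on.

```python
import numpy as np

# y = x**2, dy/dx = 2*x. The "tape" keeps a reference to x, not a copy.
x = np.array([3.0])
saved = x          # what an engine would record for the backward pass
y = x ** 2
x *= 0             # in-place overwrite of the forward input
grad = 2 * saved   # backward now reads the clobbered buffer
print(grad)        # [0.] rather than the correct [6.]
```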
TaoLv commented on issue #14274: added mkldnn dependency for plugin compile
target
URL: https://github.com/apache/incubator-mxnet/pull/14274#issuecomment-468155829
@xinyu-intel Could you help to check if this change fixes the issue?
@samskalicky @marcoabreu Is it possible to add the
TaoLv commented on issue #14253: [RFC] Introducing NumPy-compatible coding
experience into MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468155178
@reminisce @szha NumPy has references/views and strides in its ndarray
structure, while MXNet's NDArray doesn't
Bumblebee1964 commented on issue #14116: Failure in generated op.h in version
1.3.1
URL:
https://github.com/apache/incubator-mxnet/issues/14116#issuecomment-468155093
Please fix this; it is annoying to work around, especially when trying
to understand the C++ samples, as these won't
roywei opened a new pull request #14278: use cudnn for dropout by default
URL: https://github.com/apache/incubator-mxnet/pull/14278
## Description ##
Enable cuDNN for dropout after
https://github.com/apache/incubator-mxnet/pull/13896
## Checklist ##
### Essentials ###
szha commented on issue #14223: fix memory-related issues to enable ASAN tests
URL: https://github.com/apache/incubator-mxnet/pull/14223#issuecomment-468150071
@arcadiaphy thanks! Feel free to PR those changes to the respective repos.
Once merged, you can change the submodules to point to
arcadiaphy commented on issue #14223: fix memory-related issues to enable ASAN
tests
URL: https://github.com/apache/incubator-mxnet/pull/14223#issuecomment-468149375
@szha Everything seems OK now; the only problem is that I have changed the
code in the submodules of mshadow and dmlc-core.
ZhennanQin opened a new pull request #14277: Enhance PartitionGraph
URL: https://github.com/apache/incubator-mxnet/pull/14277
## Description ##
Extracted from https://github.com/apache/incubator-mxnet/pull/14113. This PR
covers:
* Add `inference_only` attr support when
JohnLee168 commented on a change in pull request #12456: [MXNET-910]
Multithreading inference.
URL: https://github.com/apache/incubator-mxnet/pull/12456#discussion_r261053384
##
File path: src/c_api/c_predict_api.cc
##
@@ -232,24 +223,117 @@ int
szha commented on issue #13896: Cudnn dropout
URL: https://github.com/apache/incubator-mxnet/pull/13896#issuecomment-468144518
@roywei by default cudnn_off is turned on. You need to turn it off to
benefit from cudnn dropout.
szha commented on issue #13825: Dropout is Slow
URL:
https://github.com/apache/incubator-mxnet/issues/13825#issuecomment-468144481
@roywei by default cudnn_off is turned on. You need to turn it off to
benefit from cudnn dropout.
reminisce edited a comment on issue #14268: Add numpy module under root module
mxnet
URL:
https://github.com/apache/incubator-mxnet/issues/14268#issuecomment-468139721
> In Gluon, both Block(nd) and Block(sym) are supported.
Can you elaborate? I don't
roywei commented on issue #13896: Cudnn dropout
URL: https://github.com/apache/incubator-mxnet/pull/13896#issuecomment-468141345
I'm not able to reproduce the speedup in the test case, see
https://github.com/apache/incubator-mxnet/issues/13825#issuecomment-468139928
ZhennanQin opened a new pull request #14276: Skip inference only subgraph pass
when gradient is needed.
URL: https://github.com/apache/incubator-mxnet/pull/14276
## Description ##
Skip inference only subgraph pass when gradient is needed. Extracted from
roywei commented on issue #13825: Dropout is Slow
URL:
https://github.com/apache/incubator-mxnet/issues/13825#issuecomment-468139928
After #13896, it seems only `mx.gluon.nn.Dropout(0.5)` has performance
improvements, but not `mx.nd.Dropout(data, 0.5, mode='always')`, following
the
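For reference, what the dropout op computes in training mode (inverted dropout) can be sketched in plain NumPy; this illustrates the operation under discussion, not MXNet's cuDNN code path:

```python
import numpy as np

def dropout(x, p, rng):
    # Zero each unit with probability p, scale survivors by 1/(1-p)
    # so the expected activation is unchanged.
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask

rng = np.random.default_rng(0)
y = dropout(np.ones((1000, 100)), 0.5, rng)
print(y.mean())   # close to 1.0 in expectation
```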
reminisce commented on issue #14268: Add numpy module under root module mxnet
URL:
https://github.com/apache/incubator-mxnet/issues/14268#issuecomment-468139721
> In Gluon, both Block(nd) and Block(sym) are supported.
Can you elaborate? I don't understand this
pengzhao-intel commented on issue #14275: Register fake grad to subgraph and
quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275#issuecomment-468129881
This is a temporary solution to enable the GluonCV INT8 flow, and we will
revert it after the improvement of CachedOP
szha commented on issue #14253: [RFC] Introducing NumPy-compatible coding
experience into MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468126196
@anirudh2290
> Why can't we add the operators under this namespace and make the interface
changes for
JohnLee168 edited a comment on issue #14260: c/c++ multiple threads inference
problem
URL:
https://github.com/apache/incubator-mxnet/issues/14260#issuecomment-468122798
> @JohnLee168 based on my understanding and documentation the
MXPredCreateMultiThread() can be used only when the
JohnLee168 commented on issue #14260: c/c++ multiple threads inference problem
URL:
https://github.com/apache/incubator-mxnet/issues/14260#issuecomment-468122798
> @JohnLee168 based on my understanding and documentation the
MXPredCreateMultiThread() can be used only when the EngineType is
anirudh2290 commented on issue #14253: [RFC] Introducing NumPy-compatible
coding experience into MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468122597
Thanks for the RFC!
> It is just that users are encouraged to access NumPy operator APIs through
anirudh2290 edited a comment on issue #14253: [RFC] Introducing
NumPy-compatible coding experience into MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/14253#issuecomment-468122597
Thanks for the RFC!
> It is just that users are encouraged to access NumPy operator APIs
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 7c617cc pypi package description.
szha commented on issue #14255: pypi package description
URL: https://github.com/apache/incubator-mxnet/pull/14255#issuecomment-468118514
The cu75 script still works for the older versions, so I think we can leave
it in as a record.
szha merged pull request #14255: pypi package description
URL: https://github.com/apache/incubator-mxnet/pull/14255
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub
bj5546 commented on issue #14116: Failure in generated op.h in version 1.3.1
URL:
https://github.com/apache/incubator-mxnet/issues/14116#issuecomment-468118035
Release 1.4.0 has the same error.
apeforest commented on issue #14274: added mkldnn dependency for plugin compile
target
URL: https://github.com/apache/incubator-mxnet/pull/14274#issuecomment-468116895
Why do we need warpctc if MXNet already has a native implementation?
apeforest commented on issue #14236: Makefile plugins target needs mkldnn
dependency
URL:
https://github.com/apache/incubator-mxnet/issues/14236#issuecomment-468116777
Why do we need warpctc if MXNet already has a native implementation?
reminisce commented on a change in pull request #14270: [MXNET-1330] Bring
nnvm::Tuple to mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14270#discussion_r261030214
##
File path: include/mxnet/tuple.h
##
@@ -0,0 +1,711 @@
+/*
+ * Licensed to the Apache
xinyu-intel opened a new pull request #14275: Register fake grad to subgraph
and quantized operators
URL: https://github.com/apache/incubator-mxnet/pull/14275
## Description ##
**Motivation:**
Register fake grad to subgraph and quantized operators to support loading
back JSON files
GengxinXu commented on issue #14254: MXNET library failing in R
URL:
https://github.com/apache/incubator-mxnet/issues/14254#issuecomment-468107605
> @GengxinXu yes, if you have the mxnet source code built, building the
R-package can be done by following this -
samskalicky opened a new pull request #14274: added mkldnn dependency for
plugin compile target
URL: https://github.com/apache/incubator-mxnet/pull/14274
## Description ##
Added the mkldnn dependency for the "plugin" compile target in the Makefile.
Without the change, the build fails
junrushao1994 commented on a change in pull request #14270: [MXNET-1330] Bring
nnvm::Tuple to mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14270#discussion_r261017652
##
File path: include/mxnet/tuple.h
##
@@ -0,0 +1,711 @@
+/*
+ * Licensed to the
leleamol commented on issue #14260: c/c++ multiple threads inference problem
URL:
https://github.com/apache/incubator-mxnet/issues/14260#issuecomment-468101141
@JohnLee168 based on my understanding and documentation the
MXPredCreateMultiThread() can be used only when the EngineType is
wagamama commented on a change in pull request #14222: Add more support for
mxnet_to_coreml
URL: https://github.com/apache/incubator-mxnet/pull/14222#discussion_r261014524
##
File path: tools/coreml/test/test_mxnet_converter.py
##
@@ -192,6 +192,15 @@ def
wagamama commented on a change in pull request #14222: Add more support for
mxnet_to_coreml
URL: https://github.com/apache/incubator-mxnet/pull/14222#discussion_r261014463
##
File path: tools/coreml/test/test_mxnet_converter.py
##
@@ -444,17 +473,39 @@ def
wagamama commented on a change in pull request #14222: Add more support for
mxnet_to_coreml
URL: https://github.com/apache/incubator-mxnet/pull/14222#discussion_r261013420
##
File path: tools/coreml/test/test_mxnet_converter.py
##
@@ -192,6 +192,15 @@ def
leleamol commented on issue #14106: Question on ResNet C++ example with Cifar10
dataset
URL:
https://github.com/apache/incubator-mxnet/issues/14106#issuecomment-468095295
@mxnet-label-bot add [Pending Requester Info]
This is an automated email from the ASF dual-hosted git repository.
zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 99d03d8 Bump the publish
eric-haibin-lin commented on a change in pull request #14240: use safe
accumulation for norm
URL: https://github.com/apache/incubator-mxnet/pull/14240#discussion_r261009436
##
File path: src/operator/mxnet_op.h
##
@@ -273,25 +273,42 @@ inline int get_num_threads(const int
eric-haibin-lin commented on a change in pull request #14240: use safe
accumulation for norm
URL: https://github.com/apache/incubator-mxnet/pull/14240#discussion_r261009149
##
File path: src/operator/tensor/broadcast_reduce-inl.cuh
##
@@ -610,14 +609,22 @@ void
leleamol commented on issue #14265: Bug in Optimizer's serializeState and
deserializeState methods (Scala)
URL:
https://github.com/apache/incubator-mxnet/issues/14265#issuecomment-468086452
@mxnet-label-bot add [Scala, Bug]
roywei commented on issue #14273: move choose_element_0index to operator
URL: https://github.com/apache/incubator-mxnet/pull/14273#issuecomment-468081913
cc @apeforest
@mxnet-label-bot add [Operator, pr-awaiting-review]
roywei opened a new pull request #14273: move choose_element_0index to operator
URL: https://github.com/apache/incubator-mxnet/pull/14273
## Description ##
fix https://github.com/apache/incubator-mxnet/issues/7853
move legacy choose_element_0index as an alias of pick operator.
It's
wkcn edited a comment on issue #14268: Add numpy module under root module mxnet
URL:
https://github.com/apache/incubator-mxnet/issues/14268#issuecomment-468076387
I think it is confusing that mx.numpy, mx.nd.numpy and mx.sym.numpy all
exist.
In Gluon, both Block(nd) and Block(sym) are supported.
wkcn commented on issue #14268: Add numpy module under root module mxnet
URL:
https://github.com/apache/incubator-mxnet/issues/14268#issuecomment-468076387
I think it is confusing that mx.numpy, mx.nd.numpy and mx.sym.numpy all
exist.
In Gluon, both Block(nd) and Block(sym) are supported.
This is an automated email from the ASF dual-hosted git repository.
lanking pushed a change to tag 1.4.0.rc0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
*** WARNING: tag 1.4.0.rc0 was deleted! ***
was c84bb78 Add bug fix #13686 to release note (#13691)
The
This is an automated email from the ASF dual-hosted git repository.
lanking pushed a change to tag 1.4.0.rc2
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
*** WARNING: tag 1.4.0.rc2 was deleted! ***
was e999a46 Use CPUPinned context in ImageRecordIOParser2
This is an automated email from the ASF dual-hosted git repository.
lanking pushed a change to tag 1.4.0.rc1
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
*** WARNING: tag 1.4.0.rc1 was deleted! ***
was 45a1554 api change (#13905)
The revisions that were on this
stephenrawls commented on issue #14264: nd.reshape truncate values
URL:
https://github.com/apache/incubator-mxnet/issues/14264#issuecomment-468069818
Not sure about others, but I like this behavior.
It allows me to create a maximum-sized array in imperative mode and reshape
it to
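The behavior being discussed, reshaping to a smaller number of elements by keeping a prefix of the buffer, can be written out explicitly; a hypothetical NumPy sketch, not MXNet's `nd.reshape`:

```python
import numpy as np

def reshape_truncate(a, shape):
    # Keep the first prod(shape) elements of the flat buffer and view
    # them with the requested shape (zero-copy for contiguous inputs).
    n = int(np.prod(shape))
    assert n <= a.size, "target shape larger than the buffer"
    return a.ravel()[:n].reshape(shape)

buf = np.arange(12)
print(reshape_truncate(buf, (2, 3)))  # [[0 1 2] [3 4 5]]
```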
This is an automated email from the ASF dual-hosted git repository.
lanking pushed a change to tag 1.4.0.rc3
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
*** WARNING: tag 1.4.0.rc3 was deleted! ***
was a03d59e Fix gtest build (#13926)
The revisions that were on
hetong007 commented on issue #14269: Updated docs for R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#issuecomment-468066943
@ankkhedia Since you have built cu80 for MXNET R 1.3, would you please help
@piyushghai to build cu80? It is helpful for us to go
This is an automated email from the ASF dual-hosted git repository.
lanking pushed a change to tag 1.4.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
at a03d59e (commit)
No new revisions were added by this update.
mxnet-label-bot commented on issue #14272: [Clojure] Change the NDArray Example
to use the ndarray formatted printer
URL:
https://github.com/apache/incubator-mxnet/issues/14272#issuecomment-468065368
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and
mxnet-label-bot commented on issue #14271: [Clojure] Add helper function to
convert formed vector to NDArray (infers shape)
URL:
https://github.com/apache/incubator-mxnet/issues/14271#issuecomment-468064686
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will
gigasquid opened a new issue #14271: [Clojure] Add helper function to convert
formed vector to NDArray (infers shape)
URL: https://github.com/apache/incubator-mxnet/issues/14271
We have an ndarray function called `array` that takes a 1-d Clojure vector
and a shape vector and turns it
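A Python analogue of the requested helper (hypothetical, for illustration): walk the nesting of a regular vector to infer its shape, so callers no longer pass the shape explicitly.

```python
def infer_shape(v):
    # Descend the first element at each level; assumes regular nesting.
    shape = []
    while isinstance(v, list):
        shape.append(len(v))
        v = v[0] if v else None
    return shape

print(infer_shape([[1, 2, 3], [4, 5, 6]]))  # [2, 3]
print(infer_shape([1.0, 2.0]))              # [2]
```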
junrushao1994 commented on issue #14235: /usr/bin/ld: cannot find -lsatlas
URL:
https://github.com/apache/incubator-mxnet/issues/14235#issuecomment-468064619
[These
lines](https://github.com/apache/incubator-mxnet/blob/master/Makefile#L176-L187)
in the Makefile check whether LAPACK is properly
junrushao1994 edited a comment on issue #14235: /usr/bin/ld: cannot find
-lsatlas
URL:
https://github.com/apache/incubator-mxnet/issues/14235#issuecomment-468064619
@HaichaoZhu
[These
lines](https://github.com/apache/incubator-mxnet/blob/master/Makefile#L176-L187)
in the Makefile
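What such a link check amounts to can be sketched as a shell probe (hypothetical, not the Makefile's exact logic): compile a trivial program against the library and see whether the linker can resolve it.

```shell
# Probe whether ld can find -l<name>, mirroring a Makefile feature test.
probe_lib() {
  echo 'int main(void){return 0;}' > /tmp/probe_$$.c
  if cc /tmp/probe_$$.c -l"$1" -o /tmp/probe_$$ 2>/dev/null; then
    echo "found -l$1"
  else
    echo "missing -l$1"   # the '/usr/bin/ld: cannot find -lsatlas' case
  fi
  rm -f /tmp/probe_$$.c /tmp/probe_$$
}
probe_lib m       # libm is present with essentially any Linux toolchain
probe_lib satlas  # single-threaded ATLAS is often not installed
```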
wagamama commented on a change in pull request #14222: Add more support for
mxnet_to_coreml
URL: https://github.com/apache/incubator-mxnet/pull/14222#discussion_r260981801
##
File path: tools/coreml/test/test_mxnet_converter.py
##
@@ -444,17 +473,39 @@ def
piyushghai commented on issue #14269: Updated docs for R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#issuecomment-468064377
> If I recall correctly, you need an earlier version of visual studio to
build cu80. Have you tried that?
Yes. I tried with
hetong007 commented on issue #14269: Updated docs for R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#issuecomment-468063691
If I recall correctly, you need an earlier version of visual studio to build
cu80. Have you tried that?
eric-haibin-lin merged pull request #14221: [op] add back support for scalar
type rescale_grad argument for adamw_update/mp_adamw_update
URL: https://github.com/apache/incubator-mxnet/pull/14221
Author: lanking
Date: Wed Feb 27 22:46:58 2019
New Revision: 32684
Log:
update key for Qing Lan
Modified:
release/incubator/mxnet/KEYS
Modified: release/incubator/mxnet/KEYS
==
--- release/incubator/mxnet/KEYS
This is an automated email from the ASF dual-hosted git repository.
haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new e3a51b5 [op] add back support for
piyushghai commented on issue #14269: Updated docs for R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#issuecomment-468057591
> Do we still support `cu80` in 1.4 release?
There were issues when I tried to make the packages for cu80.
I was
reminisce commented on a change in pull request #14270: [MXNET-1330] Bring
nnvm::Tuple to mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14270#discussion_r260969038
##
File path: include/mxnet/tuple.h
##
@@ -0,0 +1,711 @@
+/*
+ * Licensed to the Apache
reminisce commented on a change in pull request #14270: [MXNET-1330] Bring
nnvm::Tuple to mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14270#discussion_r260969465
##
File path: include/mxnet/tuple.h
##
@@ -0,0 +1,711 @@
+/*
+ * Licensed to the Apache
zhreshold commented on a change in pull request #14259: Add Gluon Transformer
Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#discussion_r260969948
##
File path: python/mxnet/gluon/data/vision/transforms.py
##
@@ -228,6 +228,54 @@ def forward(self, x):
zhreshold commented on a change in pull request #14259: Add Gluon Transformer
Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#discussion_r260970190
##
File path: python/mxnet/gluon/data/vision/transforms.py
##
@@ -228,6 +228,54 @@ def forward(self, x):
zhreshold commented on a change in pull request #14259: Add Gluon Transformer
Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#discussion_r260970232
##
File path: python/mxnet/gluon/data/vision/transforms.py
##
@@ -228,6 +228,54 @@ def forward(self, x):
zhreshold commented on a change in pull request #14259: Add Gluon Transformer
Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#discussion_r260971167
##
File path: src/operator/image/crop.cc
##
@@ -0,0 +1,78 @@
+/*
+* Licensed to the Apache Software
szha commented on issue #14247: corrected a spelling
URL: https://github.com/apache/incubator-mxnet/pull/14247#issuecomment-468050552
@pldeepesh would you mind doing a rebase?
```bash
git remote add upstream https://github.com/apache/incubator-mxnet
git pull upstream master
```
junrushao1994 edited a comment on issue #14217: [DO NOT REVIEW] Bring
nnvm::Tuple to mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14217#issuecomment-468044218
Will send this to the master branch, so closing this one for now. See
#14270
reminisce commented on issue #14270: [MXNET-1330] Bring nnvm::Tuple to
mxnet::Tuple
URL: https://github.com/apache/incubator-mxnet/pull/14270#issuecomment-468048808
Thanks for being willing to make so many micro-surgical changes scattered
all over the place.
junrushao1994 removed a comment on issue #14266: Move TShape definition and
necessary passes out of NNVM
URL:
https://github.com/apache/incubator-mxnet/issues/14266#issuecomment-468040539
Working on this #14217
junrushao1994 commented on issue #14266: Move TShape definition and necessary
passes out of NNVM
URL:
https://github.com/apache/incubator-mxnet/issues/14266#issuecomment-468047318
Working on this #14270
hetong007 commented on issue #14269: Updated docs for R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#issuecomment-468046563
Do we still support `cu80` in 1.4 release?
azai91 commented on issue #14259: Add Gluon Transformer Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#issuecomment-468045975
got it, thanks
piyushghai commented on a change in pull request #14269: Updated docs for
R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#discussion_r260961050
##
File path: docs/install/index.md
##
@@ -1090,7 +1090,8 @@ You can [build MXNet-R from
hetong007 commented on a change in pull request #14269: Updated docs for
R-package installation
URL: https://github.com/apache/incubator-mxnet/pull/14269#discussion_r260960591
##
File path: docs/install/index.md
##
@@ -1090,7 +1090,8 @@ You can [build MXNet-R from
stu1130 edited a comment on issue #14259: Add Gluon Transformer Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#issuecomment-468044964
@azai91 It throws an MXNet exception.
stu1130 commented on issue #14259: Add Gluon Transformer Crop
URL: https://github.com/apache/incubator-mxnet/pull/14259#issuecomment-468044964
@azai91 It throws an MXNet exception.