JiangZhaoh opened a new pull request #17255: Set np default dtype( float32 <->
float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
haojin2 commented on a change in pull request #17254: [numpy] change unary
infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364600022
##
File path: src/operator/mxnet_op.h
##
@@ -759,6 +759,17 @@ struct backward_grad {
}
};
+template
+
haojin2 commented on a change in pull request #17254: [numpy] change unary
infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364600122
##
File path: src/operator/mxnet_op.h
##
@@ -759,6 +759,17 @@ struct backward_grad {
}
};
+template
+
haojin2 commented on a change in pull request #17254: [numpy] change unary
infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364600688
##
File path: src/operator/numpy/np_elemwise_unary_op_basic.cc
##
@@ -82,6 +82,39 @@ NNVM_REGISTER_OP(_np_c
haojin2 commented on a change in pull request #17254: [numpy] change unary
infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364603602
##
File path: src/operator/numpy/np_elemwise_unary_op_basic.cu
##
@@ -39,6 +39,10 @@ NNVM_REGISTER_OP(_np_c
haojin2 commented on a change in pull request #17254: [numpy] change unary
infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364603714
##
File path: src/operator/tensor/elemwise_binary_op.h
##
@@ -525,6 +525,66 @@ class ElemwiseBinaryOp : pub
haojin2 commented on a change in pull request #17254: [numpy] change unary
infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364604506
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -1832,6 +1832,35 @@ def hybrid_forward(self, F,
haojin2 commented on a change in pull request #17254: [numpy] change unary
infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364604329
##
File path: src/operator/tensor/elemwise_unary_op_trig.cc
##
@@ -387,6 +409,28 @@
MXNET_OPERATOR_REGISTE
haojin2 commented on a change in pull request #17254: [numpy] change unary
infer type
URL: https://github.com/apache/incubator-mxnet/pull/17254#discussion_r364604576
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -1865,15 +1889,54 @@ def hybrid_forward(self, F,
haojin2 commented on a change in pull request #16152: [Numpy] Random.gamma()
implemented
URL: https://github.com/apache/incubator-mxnet/pull/16152#discussion_r364605387
##
File path: src/operator/numpy/random/np_gamma_op.h
##
@@ -0,0 +1,350 @@
+/*
+ * Licensed to the Apach
marcoabreu commented on a change in pull request #17255: Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#discussion_r364606241
##
File path: include/mxnet/imperative.h
##
@@ -136,6 +137,34 @@ class Imperative {
}
marcoabreu commented on a change in pull request #17255: Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#discussion_r364606683
##
File path: include/mxnet/imperative.h
##
@@ -136,6 +137,34 @@ class Imperative {
}
marcoabreu commented on a change in pull request #17255: Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#discussion_r364606558
##
File path: include/mxnet/imperative.h
##
@@ -136,6 +137,34 @@ class Imperative {
}
JiangZhaoh commented on a change in pull request #17255: Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#discussion_r364612206
##
File path: include/mxnet/imperative.h
##
@@ -136,6 +137,34 @@ class Imperative {
}
JiangZhaoh closed pull request #17255: [DO NOT REVIEW]Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255
This is an automated message from the Apache Git Service.
To respond
JiangZhaoh commented on issue #17255: [DO NOT REVIEW]Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572460479
> This PR changes so many things and it feels odd. I'm afraid it will cause
quite some confusion when people expl
JiangZhaoh commented on issue #17255: [DO NOT REVIEW]Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572460667
> I don't know... Somehow this change feels wrong to me - a default is a
default and people can always specify th
JiangZhaoh removed a comment on issue #17255: [DO NOT REVIEW]Set np default
dtype( float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572460479
> This PR changes so many things and it feels odd. I'm afraid it will cause
quite some confusion when peo
marcoabreu commented on issue #17255: [DO NOT REVIEW]Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572464020
My question is rather towards whether the functionality itself will provide
a good user experience - the implemen
haojin2 commented on issue #17255: [DO NOT REVIEW]Set np default dtype( float32
<-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572465096
@marcoabreu
If you explicitly specified some dtype in an op with a `dtype` option, it
means that you're not expe
marcoabreu commented on issue #17255: [DO NOT REVIEW]Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572466288
Okay, so there's a functionality in numpy to set the default dtype or is
this approach chosen as a compromise to
haojin2 commented on issue #17255: [DO NOT REVIEW]Set np default dtype( float32
<-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572472159
@marcoabreu
Let's probably take a look at this example to help you understand:
```python
import numpy as np
haojin2 edited a comment on issue #17255: [DO NOT REVIEW]Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572472159
@marcoabreu
Let's probably take a look at this example to help you understand:
```python
import numpy
marcoabreu commented on issue #17255: [DO NOT REVIEW]Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572474784
Thanks for elaborating, makes sense.
Merging it into set_np sounds like a good addition, indeed.
I k
haojin2 commented on issue #17255: [DO NOT REVIEW]Set np default dtype( float32
<-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572477944
@marcoabreu It's really case by case for each op, it should only affect
array creation ops, and this is not the fina
haojin2 edited a comment on issue #17255: [DO NOT REVIEW]Set np default dtype(
float32 <-> float64)
URL: https://github.com/apache/incubator-mxnet/pull/17255#issuecomment-572477944
@marcoabreu It's really case by case for each op, it should only affect
array creation ops, and this is not t
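The default-dtype thread above hinges on a contrast the truncated example was illustrating: official NumPy's array-creation ops default to float64, while MXNet's numpy interface defaults to float32. A minimal sketch of the NumPy side only (the MXNet-side behavior is what the PR proposes to make configurable):

```python
import numpy as np

# Official NumPy: array-creation ops default to float64.
a = np.array([1.0, 2.0])
z = np.zeros((2, 3))
print(a.dtype, z.dtype)  # float64 float64
```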
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r364656365
##
File path: src/operator/numpy/np_percentile_op-inl.h
##
@@ -0,0 +1,307 @@
+/*
+ * Licensed to the Apache
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r364657942
##
File path: src/operator/numpy/np_percentile_op-inl.h
##
@@ -0,0 +1,307 @@
+/*
+ * Licensed to the Apache
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r364658344
##
File path: src/operator/numpy/np_percentile_op-inl.h
##
@@ -0,0 +1,307 @@
+/*
+ * Licensed to the Apache
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r364658195
##
File path: src/operator/numpy/np_percentile_op-inl.h
##
@@ -0,0 +1,307 @@
+/*
+ * Licensed to the Apache
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r364658691
##
File path: src/operator/numpy/np_percentile_op-inl.h
##
@@ -0,0 +1,307 @@
+/*
+ * Licensed to the Apache
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r364659372
##
File path: src/operator/numpy/np_percentile_op-inl.h
##
@@ -0,0 +1,307 @@
+/*
+ * Licensed to the Apache
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r364660488
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -558,6 +558,202 @@ void TopKImpl(const RunContext
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r364660646
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -558,6 +558,202 @@ void TopKImpl(const RunContext
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r364660946
##
File path: tests/python/unittest/test_numpy_interoperability.py
##
@@ -156,6 +156,20 @@ def _add_workload
haojin2 commented on a change in pull request #17234: Op Quantile/Percentile
[Numpy]
URL: https://github.com/apache/incubator-mxnet/pull/17234#discussion_r364661074
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -5680,6 +5680,49 @@ def test_np_share_memory():
Wallart opened a new issue #17256: Sparse compression causes errors
URL: https://github.com/apache/incubator-mxnet/issues/17256
Hello everyone,
I am trying to use sparse tensors to save memory in my Transformer
architecture and I'm applying F.sparse.cast_storage on an attention weights
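`cast_storage` converts a dense tensor to a sparse storage format such as CSR. A NumPy-only sketch of the dense-to-CSR idea behind it (`dense_to_csr` is a hypothetical helper for illustration, not MXNet's implementation):

```python
import numpy as np

def dense_to_csr(mat):
    """Convert a dense 2-D array to CSR components (data, indices, indptr)."""
    data, indices, indptr = [], [], [0]
    for row in mat:
        nz = np.nonzero(row)[0]    # columns holding nonzero entries
        data.extend(row[nz])
        indices.extend(nz)
        indptr.append(len(indices))  # running count marks each row boundary
    return np.array(data), np.array(indices), np.array(indptr)

# Masked attention weights are mostly zero, so CSR stores far less.
w = np.array([[0.0, 0.9, 0.0],
              [0.1, 0.0, 0.0]])
data, indices, indptr = dense_to_csr(w)
print(data)     # [0.9 0.1]
print(indices)  # [1 0]
print(indptr)   # [0 1 2]
```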
leezu merged pull request #17253: Remove the straight dope from nightly test
URL: https://github.com/apache/incubator-mxnet/pull/17253
This is an automated email from the ASF dual-hosted git repository.
lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 4ff1c67 fix latency calculation and print issue (#17217)
add 83578b9 remove the straight dope from ni
leezu commented on issue #17246: R-package install.packages("mxnet") broken
URL:
https://github.com/apache/incubator-mxnet/issues/17246#issuecomment-572513164
You can try following the compile-from source guide instead:
https://mxnet.apache.org/get_started/ubuntu_setup.html#install-the-mxn
leezu edited a comment on issue #17246: R-package install.packages("mxnet")
broken
URL:
https://github.com/apache/incubator-mxnet/issues/17246#issuecomment-572513164
Until this is fixed, you can try following the compile-from source guide
instead:
https://mxnet.apache.org/get_started/ubu
ronnac opened a new issue #17257: armv81
URL: https://github.com/apache/incubator-mxnet/issues/17257
## Description
The make/config.mk file doesn't account for the armv81 architecture. It
should be treated the same as armv71 or armv61 with regards to USE_SSE=0 and
USE_F16C=0.
###
leezu commented on issue #17257: armv81
URL:
https://github.com/apache/incubator-mxnet/issues/17257#issuecomment-572542740
`make/config.mk` is used for the deprecated Makefile build. Based on your
steps to reproduce, you are using the CMake build.
You could look into the `CMakeLists
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new e191e9e Bump the publis
Justobe opened a new issue #17258: mxnet.base.MXNetError: [12:37:13]
src/executor/../common/exec_utils.h:391: InferShape pass cannot decide shapes
for the following arguments
URL: https://github.com/apache/incubator-mxnet/issues/17258
## Description
I find that MXNET behaves differently
szha opened a new pull request #17259: [CD] fix CD pipeline
URL: https://github.com/apache/incubator-mxnet/pull/17259
## Description ##
fix CD pipeline
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] Changes are complete
Justobe opened a new issue #17260: ValueError: Layer #1 (named
"batch_normalization_1" in the current model) was found to correspond to layer
batch_normalization_1 in the save file. However the new layer
batch_normalization_1 expects 4 weights, but the saved weights have 3 elements.
URL: https:/
ronnac commented on issue #17257: armv81
URL:
https://github.com/apache/incubator-mxnet/issues/17257#issuecomment-572555770
Thank you! Maybe the instructions on
https://mxnet.apache.org/get_started/build_from_source need to be
updated.
It states there " For building with Java/Sc
leezu commented on issue #17257: armv81
URL:
https://github.com/apache/incubator-mxnet/issues/17257#issuecomment-572560350
With respect to the armv8 build, take a look at
https://github.com/apache/incubator-mxnet/blob/83578b9a61d5ddb5ed54576fac17e24f97f35e52/ci/docker/runtime_functi
marcoabreu commented on a change in pull request #17259: [CD] fix CD pipeline
URL: https://github.com/apache/incubator-mxnet/pull/17259#discussion_r364771567
##
File path: tests/python/mkl/test_mkldnn.py
##
@@ -95,7 +95,7 @@ def __getitem__(self, key):
for _ in loader:
apkuhar commented on issue #15974: USE_NNPACK build flag not honored.
URL:
https://github.com/apache/incubator-mxnet/issues/15974#issuecomment-572620375
I managed to build an older version of mxnet with nnpack, but it was a
couple of times slower than the binary build of 1.5.1 on raspberr
apkuhar edited a comment on issue #15974: USE_NNPACK build flag not honored.
URL:
https://github.com/apache/incubator-mxnet/issues/15974#issuecomment-572620375
I managed to build an older version of mxnet with nnpack, but it was a
couple of times slower than the binary build of 1.5.1 on r
codecov-io commented on issue #17259: [CD] fix CD pipeline
URL: https://github.com/apache/incubator-mxnet/pull/17259#issuecomment-572623348
#
[Codecov](https://codecov.io/gh/apache/incubator-mxnet/pull/17259?src=pr&el=h1)
Report
> Merging
[#17259](https://codecov.io/gh/apache/incubator
codecov-io edited a comment on issue #17259: [CD] fix CD pipeline
URL: https://github.com/apache/incubator-mxnet/pull/17259#issuecomment-572623348
#
[Codecov](https://codecov.io/gh/apache/incubator-mxnet/pull/17259?src=pr&el=h1)
Report
> Merging
[#17259](https://codecov.io/gh/apache/in
ptrendx commented on issue #17209: Remove dtype from Variable created from
Gluon Parameter
URL: https://github.com/apache/incubator-mxnet/pull/17209#issuecomment-572663516
@MoisesHer has an example of a model where removing dtype actually breaks
the transition to symbolic for some reason.
codecov-io removed a comment on issue #17259: [CD] fix CD pipeline
URL: https://github.com/apache/incubator-mxnet/pull/17259#issuecomment-572623348
#
[Codecov](https://codecov.io/gh/apache/incubator-mxnet/pull/17259?src=pr&el=h1)
Report
> Merging
[#17259](https://codecov.io/gh/apache/i
stu1130 closed pull request #17248: Run the nightly test against S3 pip wheel
URL: https://github.com/apache/incubator-mxnet/pull/17248
guanxinq commented on a change in pull request #17242: add RandomApply in
gluon's transforms
URL: https://github.com/apache/incubator-mxnet/pull/17242#discussion_r364893422
##
File path: python/mxnet/gluon/data/vision/transforms.py
##
@@ -581,3 +582,28 @@ def hybrid_forwar
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new bc4b360 Bump the publis
apeforest merged pull request #16755: Enabling large tensor support for binary
broadcast operators
URL: https://github.com/apache/incubator-mxnet/pull/16755
This is an automated email from the ASF dual-hosted git repository.
apeforest pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 83578b9 remove the straight dope from nightly test (#17253)
add 6ba9aad Enabling large tensor supp
djaym7 opened a new issue #17261: Gluon adding new parameters to learn -error
URL: https://github.com/apache/incubator-mxnet/issues/17261
I am trying to replicate this PyTorch class; what am I doing wrong here?
@zhreshold help please.
![image](https://user-images.githubusercont
eric-haibin-lin merged pull request #17235: [DOC] Add a few tips for running
horovod
URL: https://github.com/apache/incubator-mxnet/pull/17235
This is an automated email from the ASF dual-hosted git repository.
haibin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 6ba9aad Enabling large tensor support for binary broadcast operators
(#16755)
add ac88f1e [DOC] Add
ChaiBapchya commented on issue #16898: Sparse int64 Large tensor support
URL: https://github.com/apache/incubator-mxnet/pull/16898#issuecomment-572720920
> Does
https://cwiki.apache.org/confluence/display/MXNET/Large+Tensor+Support
correctly document the current practice/changes?
Ye
djaym7 commented on issue #17261: Gluon adding new parameters to learn -error
URL:
https://github.com/apache/incubator-mxnet/issues/17261#issuecomment-572729107
Ablation Studies:
1. Without passing p in hybrid_forward gave this error, so followed the
"Dense" layer source code
![ima
MyraBaba commented on issue #17216: how to compile mxnet with cpp and gpu
support in the docker ?
URL:
https://github.com/apache/incubator-mxnet/issues/17216#issuecomment-572732511
So does this include mxnet cpp support and all required include files for cpp
mxnet?
--
djaym7 commented on issue #17261: Gluon adding new parameters to learn -error
URL:
https://github.com/apache/incubator-mxnet/issues/17261#issuecomment-572754155
Solved
djaym7 closed issue #17261: Gluon adding new parameters to learn -error
URL: https://github.com/apache/incubator-mxnet/issues/17261
QueensGambit commented on issue #17216: how to compile mxnet with cpp and gpu
support in the docker ?
URL:
https://github.com/apache/incubator-mxnet/issues/17216#issuecomment-572763504
Yes, the docker file contains a shared object library at:
* /usr/local/lib/libmxnet.so
which wa
cyrusbehr opened a new issue #17262: Unable to build / link mxnet against cuda
10.2
URL: https://github.com/apache/incubator-mxnet/issues/17262
I am trying to build mxnet with cuda 10.2 in a docker container.
For my build, I am using the following docker image from nvidia:
`nvidia/cuda:
rondogency commented on issue #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#issuecomment-572773130
@samskalicky @wkcn resolved all the comments!
zhreshold opened a new issue #17263: [mxnet 2.0][item 4.8][RFC] Gluon Data API
Extension and Fixes(Part 1)
URL: https://github.com/apache/incubator-mxnet/issues/17263
## Description
This is the part 1 of Gluon Data API extension and fixes, which mainly focus
on cleaning up diverging
haojin2 commented on a change in pull request #17014: [NumPy] Add NumPy support
for norm
URL: https://github.com/apache/incubator-mxnet/pull/17014#discussion_r364987599
##
File path: src/operator/numpy/linalg/broadcast_reduce_customized-inl.cuh
##
@@ -0,0 +1,416 @@
+/*
+ *
eric-haibin-lin commented on issue #16735: Use single-bit for mask in dropout
operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-572787745
For GPT-2, the memory usage goes from 30GB to 26GB. For BERT, it goes from
26GB to 23GB. I didn't notice much difference i
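The saving in the single-bit dropout mask comes from packing one bit per element instead of one full-width value per element. A rough sketch of the ratio (assuming the unpacked mask stored one byte per element; the actual layouts in the PR may differ):

```python
n = 1 << 30                  # number of mask elements (illustrative)
unpacked_bytes = n           # one uint8 flag per element
packed_bytes = (n + 7) // 8  # one bit per element, padded up to whole bytes
print(unpacked_bytes // packed_bytes)  # 8
```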
haojin2 commented on a change in pull request #17188: [Numpy] Add
linalg.eig/eigh/eigvals/eigvalsh op
URL: https://github.com/apache/incubator-mxnet/pull/17188#discussion_r364996931
##
File path: src/operator/numpy/linalg/np_eigvals.cc
##
@@ -0,0 +1,124 @@
+/*
+ * Licensed
haojin2 commented on a change in pull request #17188: [Numpy] Add
linalg.eig/eigh/eigvals/eigvalsh op
URL: https://github.com/apache/incubator-mxnet/pull/17188#discussion_r364998303
##
File path: src/operator/numpy/linalg/np_eig.cu
##
@@ -0,0 +1,61 @@
+/*
+ * Licensed to t
haojin2 commented on a change in pull request #17188: [Numpy] Add
linalg.eig/eigh/eigvals/eigvalsh op
URL: https://github.com/apache/incubator-mxnet/pull/17188#discussion_r364998737
##
File path: python/mxnet/symbol/numpy/linalg.py
##
@@ -496,3 +497,265 @@ def tensorsolve(
stu1130 opened a new pull request #17264: Image CenterCrop Op
URL: https://github.com/apache/incubator-mxnet/pull/17264
## Description ##
1. Add image.center_crop op which takes 3-D & 4-D image
2. Make CenterCrop hybridizable
## Checklist ##
### Essentials ###
Please feel
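Center-cropping keeps the central region of an image. A NumPy sketch of the idea for a 3-D HWC image (for illustration only, not the new operator's implementation; `center_crop` here is a hypothetical helper):

```python
import numpy as np

def center_crop(img, size):
    """Crop the central (h, w) region of an HWC image."""
    h, w = size
    top = (img.shape[0] - h) // 2
    left = (img.shape[1] - w) // 2
    return img[top:top + h, left:left + w, :]

img = np.arange(4 * 6 * 3).reshape(4, 6, 3)
print(center_crop(img, (2, 2)).shape)  # (2, 2, 3)
```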
zrsm commented on issue #2838: Cannot find -lcuda? what's wrong?
URL:
https://github.com/apache/incubator-mxnet/issues/2838#issuecomment-572795545
> libcuda.so is under /usr/local/cuda/lib64/stubs/libcuda.so, just make a
soft link to /usr/local/cuda/lib64/libcuda.so. I am not sure if this
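The quoted fix creates a soft link so the linker can resolve `-lcuda` against the driver stub. Demonstrated below on dummy files in a temp directory (the real paths are the CUDA ones quoted above):

```python
import os
import tempfile

# Simulate the stub layout with dummy files, then create the soft link
# the quoted fix describes for /usr/local/cuda/lib64.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "lib64", "stubs"))
stub = os.path.join(root, "lib64", "stubs", "libcuda.so")
open(stub, "w").close()
link = os.path.join(root, "lib64", "libcuda.so")
os.symlink(stub, link)
print(os.path.islink(link))  # True
```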
haojin2 commented on a change in pull request #17188: [Numpy] Add
linalg.eig/eigh/eigvals/eigvalsh op
URL: https://github.com/apache/incubator-mxnet/pull/17188#discussion_r365001641
##
File path: src/operator/numpy/linalg/np_eigvals.cc
##
@@ -0,0 +1,124 @@
+/*
+ * Licensed
haojin2 commented on a change in pull request #17188: [Numpy] Add
linalg.eig/eigh/eigvals/eigvalsh op
URL: https://github.com/apache/incubator-mxnet/pull/17188#discussion_r365002633
##
File path: src/operator/numpy/linalg/np_eig.cc
##
@@ -0,0 +1,157 @@
+/*
+ * Licensed to
roywei commented on issue #17250: mxnet.base.MXNetError: Error in operator
transpose176: [02:14:06] src/operator/tensor/./matrix_op-inl.h:354: Check
failed: shp.ndim() == param.axes.ndim() (-1 vs. 4)
URL:
https://github.com/apache/incubator-mxnet/issues/17250#issuecomment-572799442
Hi @Ju
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial
doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r365006013
##
File path: example/extensions/lib_custom_op/README.md
##
@@ -0,0 +1,83 @@
+CustomOp Example and Tutorial
+
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial
doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r365006466
##
File path: example/extensions/lib_custom_op/README.md
##
@@ -0,0 +1,83 @@
+CustomOp Example and Tutorial
+
samskalicky commented on a change in pull request #17241: Add CustomOp tutorial
doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#discussion_r365007081
##
File path: example/extensions/lib_custom_op/README.md
##
@@ -0,0 +1,83 @@
+CustomOp Example and Tutorial
+
rondogency commented on issue #17241: Add CustomOp tutorial doc
URL: https://github.com/apache/incubator-mxnet/pull/17241#issuecomment-572807610
@eric-haibin-lin @aaronmarkham can you also take a quick look at the doc,
thanks!
---
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 3170a97 Bump the publis
apeforest edited a comment on issue #16735: Use single-bit for mask in dropout
operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-572830100
@TaoLv Thanks for your review. I ran operator profiling using
benchmark.opperf.utils.benchmark_utils.run_performance_test
apeforest commented on issue #16735: Use single-bit for mask in dropout operator
URL: https://github.com/apache/incubator-mxnet/pull/16735#issuecomment-572830100
@TaoLv Thanks for your review. I ran operator profiling using
benchmark.opperf.utils.benchmark_utils.run_performance_test. The re
szha commented on a change in pull request #17259: [CD] fix CD pipeline
URL: https://github.com/apache/incubator-mxnet/pull/17259#discussion_r365044385
##
File path: tests/python/mkl/test_mkldnn.py
##
@@ -95,7 +95,7 @@ def __getitem__(self, key):
for _ in loader:
szha commented on a change in pull request #16408: Add MXNet Ops for fast
multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#discussion_r365049085
##
File path: tests/python/gpu/test_operator_gpu.py
##
@@ -2493,13 +2493,327 @@ def test_arange_lik
Justobe commented on issue #17250: mxnet.base.MXNetError: Error in operator
transpose176: [02:14:06] src/operator/tensor/./matrix_op-inl.h:354: Check
failed: shp.ndim() == param.axes.ndim() (-1 vs. 4)
URL:
https://github.com/apache/incubator-mxnet/issues/17250#issuecomment-572847927
Hello
Justobe edited a comment on issue #17250: mxnet.base.MXNetError: Error in
operator transpose176: [02:14:06] src/operator/tensor/./matrix_op-inl.h:354:
Check failed: shp.ndim() == param.axes.ndim() (-1 vs. 4)
URL:
https://github.com/apache/incubator-mxnet/issues/17250#issuecomment-572847927
hanke580 commented on a change in pull request #17171: [Numpy] add row_stack
(=vstack)
URL: https://github.com/apache/incubator-mxnet/pull/17171#discussion_r365056395
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -3685,6 +3685,51 @@ def get_list(arrays):
retur
ChaiBapchya commented on a change in pull request #16898: Sparse int64 Large
tensor support
URL: https://github.com/apache/incubator-mxnet/pull/16898#discussion_r365056993
##
File path: src/operator/tensor/cast_storage-inl.h
##
@@ -283,14 +283,14 @@ struct CopyCsrDataToDns