apeforest commented on a change in pull request #15288: [MXNET-978] Higher
order gradient for sigmoid
URL: https://github.com/apache/incubator-mxnet/pull/15288#discussion_r296117019
##
File path: src/operator/tensor/elemwise_unary_op_basic.cc
##
@@ -121,7 +121,30 @@ The st
apeforest commented on a change in pull request #15288: [MXNET-978] Higher
order gradient for sigmoid
URL: https://github.com/apache/incubator-mxnet/pull/15288#discussion_r296115952
##
File path: src/operator/tensor/elemwise_unary_op_basic.cc
##
@@ -121,7 +121,30 @@ The st
apeforest commented on a change in pull request #15288: [MXNET-978] Higher
order gradient for sigmoid
URL: https://github.com/apache/incubator-mxnet/pull/15288#discussion_r296115493
##
File path: src/operator/tensor/elemwise_unary_op_basic.cc
##
@@ -121,7 +121,30 @@ The st
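For context on the math under review: the sigmoid's higher-order gradients follow from the identities σ'(x) = σ(x)(1 − σ(x)) and σ''(x) = σ'(x)(1 − 2σ(x)). A minimal NumPy sketch (not the PR's actual kernel) checks these identities against finite differences:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-3.0, 3.0, 7)
s = sigmoid(x)

# Analytic derivatives from the identities above.
grad = s * (1.0 - s)                # first derivative
grad_grad = grad * (1.0 - 2.0 * s)  # second derivative

# Independent central finite-difference estimates.
h = 1e-4
fd_grad = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)
fd_grad_grad = (sigmoid(x + h) - 2.0 * s + sigmoid(x - h)) / h ** 2
```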
gyshi closed pull request #15305: Numpy bitwise_xor operator
URL: https://github.com/apache/incubator-mxnet/pull/15305
This is an automated message from the Apache Git Service.
To respond to the message, please log on to Git
mikemwx commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r296110575
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -539,7 +566,6 @@ def expand_dims(a, axis):
"""
return
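The diff touches `expand_dims` in the numpy-compatible namespace of an argsort PR; for reference, the stock NumPy behavior these operators are meant to mirror (plain NumPy shown, not the mxnet implementation):

```python
import numpy as np

a = np.array([3, 1, 2])
order = np.argsort(a)          # indices that would sort `a` ascending
b = np.expand_dims(a, axis=0)  # insert a new axis of length 1 at position 0
```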
wuxun-zhang commented on a change in pull request #15164: [C++] Improve
inference script to support benchmark on Imagenet
URL: https://github.com/apache/incubator-mxnet/pull/15164#discussion_r296105294
##
File path: cpp-package/example/inference/README.md
##
@@ -30,34 +30,
haojin2 commented on a change in pull request #15302: [Numpy] Numpy hstack
URL: https://github.com/apache/incubator-mxnet/pull/15302#discussion_r296104360
##
File path: src/operator/numpy/np_matrix_op.cc
##
@@ -310,6 +364,47 @@ NNVM_REGISTER_OP(_backward_np_concat)
.set_at
haojin2 commented on a change in pull request #15302: [Numpy] Numpy hstack
URL: https://github.com/apache/incubator-mxnet/pull/15302#discussion_r296104231
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -858,6 +858,73 @@ def get_new_shape(shape, axis):
TaoLv commented on a change in pull request #15164: [C++] Improve inference
script to support benchmark on Imagenet
URL: https://github.com/apache/incubator-mxnet/pull/15164#discussion_r296102328
##
File path: cpp-package/example/inference/README.md
##
@@ -30,34 +30,116 @@
haojin2 commented on a change in pull request #15302: [Numpy] Numpy hstack
URL: https://github.com/apache/incubator-mxnet/pull/15302#discussion_r296104167
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -858,6 +858,73 @@ def get_new_shape(shape, axis):
TaoLv commented on a change in pull request #15164: [C++] Improve inference
script to support benchmark on Imagenet
URL: https://github.com/apache/incubator-mxnet/pull/15164#discussion_r296103928
##
File path: cpp-package/example/inference/unit_test_imagenet_inference.sh
##
TaoLv commented on a change in pull request #15164: [C++] Improve inference
script to support benchmark on Imagenet
URL: https://github.com/apache/incubator-mxnet/pull/15164#discussion_r296103372
##
File path: cpp-package/example/inference/unit_test_imagenet_inference.sh
##
haojin2 commented on a change in pull request #15302: [Numpy] Numpy hstack
URL: https://github.com/apache/incubator-mxnet/pull/15302#discussion_r296103949
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -858,6 +858,73 @@ def get_new_shape(shape, axis):
TaoLv commented on a change in pull request #15164: [C++] Improve inference
script to support benchmark on Imagenet
URL: https://github.com/apache/incubator-mxnet/pull/15164#discussion_r296101873
##
File path: cpp-package/example/inference/README.md
##
@@ -30,34 +30,116 @@
TaoLv commented on a change in pull request #15164: [C++] Improve inference
script to support benchmark on Imagenet
URL: https://github.com/apache/incubator-mxnet/pull/15164#discussion_r296103433
##
File path: cpp-package/example/inference/unit_test_imagenet_inference.sh
##
haojin2 commented on a change in pull request #15302: [Numpy] Numpy hstack
URL: https://github.com/apache/incubator-mxnet/pull/15302#discussion_r296103759
##
File path: src/operator/numpy/np_matrix_op.cc
##
@@ -310,6 +364,47 @@ NNVM_REGISTER_OP(_backward_np_concat)
.set_at
haojin2 commented on a change in pull request #15302: [Numpy] Numpy hstack
URL: https://github.com/apache/incubator-mxnet/pull/15302#discussion_r296103874
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -858,6 +858,73 @@ def get_new_shape(shape, axis):
haojin2 commented on a change in pull request #15302: [Numpy] Numpy hstack
URL: https://github.com/apache/incubator-mxnet/pull/15302#discussion_r296103723
##
File path: src/operator/numpy/np_matrix_op.cc
##
@@ -310,6 +364,47 @@ NNVM_REGISTER_OP(_backward_np_concat)
.set_at
haojin2 commented on a change in pull request #15302: [Numpy] Numpy hstack
URL: https://github.com/apache/incubator-mxnet/pull/15302#discussion_r296103529
##
File path: src/operator/numpy/np_matrix_op.cc
##
@@ -310,6 +364,47 @@ NNVM_REGISTER_OP(_backward_np_concat)
.set_at
haojin2 commented on a change in pull request #15302: [Numpy] Numpy hstack
URL: https://github.com/apache/incubator-mxnet/pull/15302#discussion_r296103381
##
File path: python/mxnet/numpy/multiarray.py
##
@@ -1404,6 +1404,23 @@ def stack(arrays, axis=0, out=None):
retu
haojin2 commented on a change in pull request #15305: Numpy bitwise_xor
operator
URL: https://github.com/apache/incubator-mxnet/pull/15305#discussion_r296103023
##
File path: src/operator/numpy/np_elemwise_binary_op.cc
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache So
haojin2 commented on a change in pull request #15305: Numpy bitwise_xor
operator
URL: https://github.com/apache/incubator-mxnet/pull/15305#discussion_r296102606
##
File path: src/operator/numpy/np_elemwise_binary_op.cu
##
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache So
haojin2 commented on a change in pull request #15305: Numpy bitwise_xor
operator
URL: https://github.com/apache/incubator-mxnet/pull/15305#discussion_r296102526
##
File path: src/operator/numpy/np_elemwise_binary_op.cu
##
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache So
haojin2 commented on a change in pull request #15305: Numpy bitwise_xor
operator
URL: https://github.com/apache/incubator-mxnet/pull/15305#discussion_r296102300
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -948,6 +948,72 @@ def hybrid_forward(self, F, x):
haojin2 commented on a change in pull request #15305: Numpy bitwise_xor
operator
URL: https://github.com/apache/incubator-mxnet/pull/15305#discussion_r296102367
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -948,6 +948,72 @@ def hybrid_forward(self, F, x):
haojin2 commented on a change in pull request #15305: Numpy bitwise_xor
operator
URL: https://github.com/apache/incubator-mxnet/pull/15305#discussion_r296102398
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -948,6 +948,72 @@ def hybrid_forward(self, F, x):
anirudh2290 commented on issue #15297: Sockeye failure with MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15297#issuecomment-504295569
With the PR : #15298 also it segfaults and core dumps.
gyshi opened a new pull request #15305: Numpy bitwise_xor operator
URL: https://github.com/apache/incubator-mxnet/pull/15305
## Description ##
Implemented the numpy bitwise_xor operator. Support for broadcasting is
implemented and tested.
## Checklist ##
### Essentials ###
Pl
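As a reference point for the review, this is how NumPy's own `bitwise_xor` broadcasts (stock NumPy shown, not the new mxnet operator):

```python
import numpy as np

x = np.array([[0b1100], [0b1010]])  # shape (2, 1)
y = np.array([0b1001, 0b0110])      # shape (2,)
z = np.bitwise_xor(x, y)            # broadcasts to shape (2, 2)
```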
adis300 commented on issue #15301: Ignore generated nnvm.cc template. This file
is created whenever `make` is run
URL: https://github.com/apache/incubator-mxnet/pull/15301#issuecomment-504293244
Reopening the pull request does not help.
-
adis300 closed pull request #15301: Ignore generated nnvm.cc template. This
file is created whenever `make` is run
URL: https://github.com/apache/incubator-mxnet/pull/15301
anirudh2290 commented on issue #15297: Sockeye failure with MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15297#issuecomment-504292979
@pengzhao-intel @roywei I am currently building with @ZhennanQin and will
try it out.
--
anirudh2290 edited a comment on issue #15297: Sockeye failure with MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15297#issuecomment-504292979
@pengzhao-intel @roywei I am currently building with @ZhennanQin's commit and
will try it out.
-
kshitij12345 commented on a change in pull request #15288: [MXNET-978] Higher
order gradient for sigmoid
URL: https://github.com/apache/incubator-mxnet/pull/15288#discussion_r296098176
##
File path: src/operator/tensor/elemwise_unary_op_basic.cc
##
@@ -121,7 +121,30 @@ The
anirudh2290 commented on issue #15297: Sockeye failure with MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15297#issuecomment-504291257
did you modify the mxnet version in requirements file ?
roywei commented on issue #15297: Sockeye failure with MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15297#issuecomment-504290732
Hi @anirudh2290 @ptrendx
I'm still not able to reproduce the crash. My steps are below; could you
help point out what's wrong?
Mach
tomoncle commented on issue #15254:
mxnet(mxnet-full_2.11-linux-x86_64-gpu-1.5.0-SNAPSHOT) cannot support cuda10.1?
URL:
https://github.com/apache/incubator-mxnet/issues/15254#issuecomment-504288762
Thank you all so much for your warm support, including much constructive
advice.
lanking520 commented on issue #15254:
mxnet(mxnet-full_2.11-linux-x86_64-gpu-1.5.0-SNAPSHOT) cannot support cuda10.1?
URL:
https://github.com/apache/incubator-mxnet/issues/15254#issuecomment-504285666
For image preprocessing, you can use your preferred processing toolkit. MXNet
provides bui
tuskiomi opened a new issue #15304: Mxnet fails to properly parallelize with a
ryzen CPU.
URL: https://github.com/apache/incubator-mxnet/issues/15304
## Description
With a Ryzen 7 2700 CPU, the libmxnet DLL fails to properly parallelize. At
any one time, it's using only one core of
mxnet-label-bot commented on issue #15304: Mxnet fails to properly parallelize
with a ryzen CPU.
URL:
https://github.com/apache/incubator-mxnet/issues/15304#issuecomment-504280330
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels
adis300 opened a new pull request #15303: Fix amalgamation failure.
URL: https://github.com/apache/incubator-mxnet/pull/15303
Referring to issue
[#14808](https://github.com/apache/incubator-mxnet/issues/14808):
the amalgamation predict API fails when loading a network. The error messages
haojin2 commented on a change in pull request #15277: [Numpy] Numpy argsort
URL: https://github.com/apache/incubator-mxnet/pull/15277#discussion_r296085060
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -539,7 +566,6 @@ def expand_dims(a, axis):
"""
return
wkcn commented on issue #15287: pretrained model
URL:
https://github.com/apache/incubator-mxnet/issues/15287#issuecomment-504273462
Thank you for pointing it out!
It seems that the link `dmlc.ml` is out of date.
We can replace `data.dmlc.ml` with `data.mxnet.io` temporarily.
e.
mikemwx opened a new pull request #15302: [Numpy] Numpy hstack
URL: https://github.com/apache/incubator-mxnet/pull/15302
## Description ##
Add numpy-compatible hstack backed by the concat kernels
[Numpy
hstack](https://docs.scipy.org/doc/numpy/reference/generated/numpy.hstack.html)
## C
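For reference, the NumPy `hstack` semantics the PR targets: inputs with two or more dimensions are concatenated along axis 1, and 1-D inputs along axis 0 (stock NumPy shown):

```python
import numpy as np

a = np.array([[1], [2]])
b = np.array([[3, 4], [5, 6]])
c = np.hstack((a, b))          # 2-D inputs: joined along axis 1
d = np.hstack(([1, 2], [3]))   # 1-D inputs: joined along axis 0
```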
adis300 opened a new pull request #15301: Ignore generated nnvm.cc template.
This file is created whenever `make` is run
URL: https://github.com/apache/incubator-mxnet/pull/15301
Ignore generated nnvm.cc template in amalgamation dir.
haojin2 closed pull request #15291: Numpy
URL: https://github.com/apache/incubator-mxnet/pull/15291
szha commented on issue #15106: NER example: fix metrics computation
URL: https://github.com/apache/incubator-mxnet/pull/15106#issuecomment-504269199
@WilliamTambellini I triggered these builds again.
adis300 commented on issue #15060: Ignore generated nnvm.cc
URL: https://github.com/apache/incubator-mxnet/pull/15060#issuecomment-504269139
@piyushghai I will open another item. According to the CI information, the
failure is not caused by this PR.
ZhennanQin opened a new pull request #15300: point fix the vector declaration
in MultiBoxDetection
URL: https://github.com/apache/incubator-mxnet/pull/15300
## Description ##
`vector::reserve` won't change the vector's size, so it's not correct to
access the vector by index after `reserve`.
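The distinction can be shown in isolation (hypothetical snippet, not the MultiBoxDetection code itself): `reserve` only allocates capacity, while `resize` changes the size, so indexed access is defined only after `resize`:

```cpp
#include <cassert>
#include <vector>

// reserve() vs resize(): only resize() creates elements.
inline bool reserve_vs_resize() {
  std::vector<float> v;
  v.reserve(4);                 // capacity grows, size() stays 0
  bool reserve_ok = (v.size() == 0 && v.capacity() >= 4);
  // v[0] here would be undefined behavior: no element exists yet.

  std::vector<float> w;
  w.resize(4);                  // size() becomes 4, elements zero-initialized
  w[0] = 1.0f;                  // well-defined indexed access
  bool resize_ok = (w.size() == 4 && w[0] == 1.0f);
  return reserve_ok && resize_ok;
}
```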
adis300 opened a new pull request #15299: Typo fix in plan_memory relase ->
release.
URL: https://github.com/apache/incubator-mxnet/pull/15299
Typo fix in plan_memory relase -> release.
Zha0q1 opened a new pull request #15240: Fixing duplication in operator
profiling
URL: https://github.com/apache/incubator-mxnet/pull/15240
## Description ##
fix: https://github.com/apache/incubator-mxnet/issues/10520
fix: https://github.com/apache/incubator-mxnet/issues/15243
For
Zha0q1 closed pull request #15240: Fixing duplication in operator profiling
URL: https://github.com/apache/incubator-mxnet/pull/15240
pengzhao-intel commented on issue #15297: Sockeye failure with MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15297#issuecomment-504264473
@anirudh2290 thanks for the issue.
Could you help to take a quick try for the fix of #15298 ?
ZhennanQin opened a new pull request #15298: Fix Cached_op with
static_shape=true
URL: https://github.com/apache/incubator-mxnet/pull/15298
## Description ##
Should address https://github.com/apache/incubator-mxnet/issues/15281
@pengzhao-intel @TaoLv @junrushao1994 @zheng-da
szha closed issue #15268: Backward doesn't work on LSTM with sequence_length
URL: https://github.com/apache/incubator-mxnet/issues/15268
mxnet-label-bot commented on issue #15297: Sockeye failure with MXNet
URL:
https://github.com/apache/incubator-mxnet/issues/15297#issuecomment-504254511
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so
that the appropriate MXN
anirudh2290 opened a new issue #15297: Sockeye failure with MXNet
URL: https://github.com/apache/incubator-mxnet/issues/15297
## Description
Install sockeye and run `python setup.py test`.
Change the relevant line in requirements.txt and requirements.gpu-cu100.txt to
set the mxnet version to nightly
This is an automated email from the ASF dual-hosted git repository.
marcoabreu pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new dd3156d Bump the publish
szha commented on issue #15064: [Fit API] Update fit method and LoggingHandler
URL: https://github.com/apache/incubator-mxnet/pull/15064#issuecomment-504250505
@abhinavs95 `train_epoch`'s granularity is too coarse to ensure that
registered handlers work correctly.
-
szha commented on issue #14619: [Discussion] 1.5.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/14619#issuecomment-504249531
@vafl yes what I meant is that 1.5.0 will include the fix. If you use the
nightly package of mxnet you will see that the included code example is pa
samskalicky opened a new issue #15296: Graph Executor does not track mutable
input dependencies
URL: https://github.com/apache/incubator-mxnet/issues/15296
## Description
In the Symbol/Module flow, the graph executor does not track mutable input
dependencies when setting up the use_vars
mxnet-label-bot commented on issue #15296: Graph Executor does not track
mutable input dependencies
URL:
https://github.com/apache/incubator-mxnet/issues/15296#issuecomment-504245287
Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labe
larroy commented on a change in pull request #15210: Custom Operator Profiling
Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r296058696
##
File path: tests/python/unittest/test_profiler.py
##
@@ -269,6 +270,134 @@ def check_sorting(debug_
apeforest commented on a change in pull request #15170: [MXNET-1413] Adding
Large Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r296058604
##
File path: tests/nightly/test_large_array.py
##
@@ -326,6 +326,33 @@ def t
larroy commented on a change in pull request #15210: Custom Operator Profiling
Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r296057374
##
File path: src/profiler/custom_op_profiler.h
##
@@ -0,0 +1,93 @@
+/*
+* Licensed to the Apache Soft
anirudh2290 commented on a change in pull request #15118: Conversion from FP32
model to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r296057985
##
File path: src/nnvm/low_precision_pass.cc
##
@@ -0,0 +1,265 @@
+/*
+ * Licensed
access2rohit commented on a change in pull request #15170: [MXNET-1413] Adding
Large Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r296055046
##
File path: tests/nightly/test_large_array.py
##
@@ -326,6 +326,30 @@ de
access2rohit commented on a change in pull request #15170: [MXNET-1413] Adding
Large Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r296055099
##
File path: tests/nightly/test_large_array.py
##
@@ -326,6 +326,33 @@ de
anirudhacharya commented on issue #14956: mxnet.image.imread can not correctly
read jpg with orientation
URL:
https://github.com/apache/incubator-mxnet/issues/14956#issuecomment-504239776
Here is a weird discrepancy I am facing when using OpenCV with Python and
with C++:
```python
larroy commented on issue #10988: Flaky test: test_operator_gpu.test_countsketch
URL:
https://github.com/apache/incubator-mxnet/issues/10988#issuecomment-504239437
But all of them are failing, so it might not be specific to this one; it looks
like memory corruption or a hardware issue.
--
This is an automated email from the ASF dual-hosted git repository.
dickjc123 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 2de0db0 Showing proper error when csr array is not 2D in shape.
(#15242)
add c45d23b Proper bulki
apeforest commented on a change in pull request #15170: [MXNET-1413] Adding
Large Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r296053650
##
File path: tests/nightly/test_large_array.py
##
@@ -326,6 +326,33 @@ def t
larroy commented on a change in pull request #15167: [WIP] Pointwise fusion for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r296053557
##
File path: src/operator/fusion/fused_op.cu
##
@@ -0,0 +1,396 @@
+/*
+ * Licensed to the Apache Software Fo
DickJC123 merged pull request #15272: Proper bulking of ops not using FCompute
URL: https://github.com/apache/incubator-mxnet/pull/15272
DickJC123 commented on issue #15272: Proper bulking of ops not using FCompute
URL: https://github.com/apache/incubator-mxnet/pull/15272#issuecomment-504238596
LGTM.
larroy commented on a change in pull request #15167: [WIP] Pointwise fusion for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r296053378
##
File path: src/operator/fusion/fused_op.cu
##
@@ -0,0 +1,534 @@
+/*
+ * Licensed to the Apache Software Fo
larroy commented on a change in pull request #15167: [WIP] Pointwise fusion for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r296053117
##
File path: src/operator/fusion/fused_op-inl.h
##
@@ -0,0 +1,906 @@
+/*
+ * Licensed to the Apache Software
larroy commented on a change in pull request #15167: [WIP] Pointwise fusion for
GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#discussion_r296051789
##
File path: src/operator/fusion/fused_op.cu
##
@@ -0,0 +1,534 @@
+/*
+ * Licensed to the Apache Software Fo
larroy commented on a change in pull request #15170: [MXNET-1413] Adding Large
Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r296048091
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -605,30 +633,32 @@ voi
larroy commented on issue #14779: Fully connected, higher order grad
URL: https://github.com/apache/incubator-mxnet/pull/14779#issuecomment-504231919
@apeforest @kshitij12345 is this good to merge?
larroy commented on a change in pull request #15118: Conversion from FP32 model
to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r296043486
##
File path: src/c_api/c_api_symbolic.cc
##
@@ -810,6 +810,191 @@ int MXQuantizeSymbol(
larroy commented on a change in pull request #15118: Conversion from FP32 model
to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r296043719
##
File path: src/nnvm/low_precision_pass.cc
##
@@ -0,0 +1,261 @@
+/*
+ * Licensed to th
larroy commented on a change in pull request #15118: Conversion from FP32 model
to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r296043276
##
File path: src/nnvm/low_precision_pass.cc
##
@@ -0,0 +1,265 @@
+/*
+ * Licensed to th
larroy commented on a change in pull request #15118: Conversion from FP32 model
to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r296043389
##
File path: src/c_api/c_api_symbolic.cc
##
@@ -810,6 +810,191 @@ int MXQuantizeSymbol(
larroy commented on a change in pull request #15118: Conversion from FP32 model
to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r296044987
##
File path: src/c_api/c_api_symbolic.cc
##
@@ -810,6 +810,191 @@ int MXQuantizeSymbol(
larroy commented on a change in pull request #15118: Conversion from FP32 model
to Mixed Precision model
URL: https://github.com/apache/incubator-mxnet/pull/15118#discussion_r296043802
##
File path: src/nnvm/low_precision_pass.cc
##
@@ -0,0 +1,261 @@
+/*
+ * Licensed to th
Roshrini commented on issue #15280: Fixed C++ inference tutorial
URL: https://github.com/apache/incubator-mxnet/pull/15280#issuecomment-504227427
@NRauschmayr Thank you for fixing the example. Can you please fix the failing
CI?
@leleamol Can you take a look?
---
leleamol commented on issue #15261: C++ Can't load model with Symbol::Load when
using static library
URL:
https://github.com/apache/incubator-mxnet/issues/15261#issuecomment-504227144
I investigated further and noticed that using only libmxnet.a for linking may
not be enough.
There are o
stu1130 commented on a change in pull request #15282: Numpy compatible eye
URL: https://github.com/apache/incubator-mxnet/pull/15282#discussion_r296040469
##
File path: src/operator/numpy/np_init_op.h
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (
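For comparison, the NumPy `eye` behavior the new operator aims to match, including the `k` diagonal offset and non-square shapes (stock NumPy shown):

```python
import numpy as np

# 3x4 matrix with ones on the first superdiagonal (k=1)
m = np.eye(3, 4, k=1)
```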
access2rohit commented on a change in pull request #15170: [MXNET-1413] Adding
Large Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r296037846
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -605,30 +633,32
apeforest commented on a change in pull request #15170: [MXNET-1413] Adding
Large Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r296036559
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -605,30 +633,32 @@
larroy commented on a change in pull request #15170: [MXNET-1413] Adding Large
Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r296036072
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -605,30 +633,32 @@ voi
Zha0q1 commented on a change in pull request #15210: Custom Operator Profiling
Enhancement
URL: https://github.com/apache/incubator-mxnet/pull/15210#discussion_r296036218
##
File path: src/profiler/profiler.h
##
@@ -1149,8 +1158,15 @@ struct ProfileOperator : public Profil
larroy commented on a change in pull request #15170: [MXNET-1413] Adding Large
Tensor support for sort operators
URL: https://github.com/apache/incubator-mxnet/pull/15170#discussion_r296036072
##
File path: src/operator/tensor/ordering_op-inl.h
##
@@ -605,30 +633,32 @@ voi