This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 32674c3 Bump the publis
ChaiBapchya commented on a change in pull request #17502: [OpPerf] Implement
remaining random sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502#discussion_r373754277
##
File path: benchmark/opperf/utils/op_registry_utils.py
##
@@ -223,13 +223,16 @@ def
ChaiBapchya commented on a change in pull request #17502: [OpPerf] Implement
remaining random sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502#discussion_r373754176
##
File path: benchmark/opperf/nd_operations/random_sampling_operators.py
##
@@ -19,12
ChaiBapchya commented on a change in pull request #17502: [OpPerf] Implement
remaining random sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502#discussion_r373754190
##
File path: benchmark/opperf/rules/default_params.py
##
@@ -58,6 +58,11 @@
DEFAULT_
ChaiBapchya commented on issue #17483: Tests failed when I try to build
scala-package from source
URL:
https://github.com/apache/incubator-mxnet/issues/17483#issuecomment-580985504
@zachgk @lanking520 any idea?
connorgoggins commented on a change in pull request #17502: [OpPerf] Implement
remaining random sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502#discussion_r373753855
##
File path: benchmark/opperf/utils/op_registry_utils.py
##
@@ -23,8 +23,7 @@
from
connorgoggins commented on a change in pull request #17502: [OpPerf] Implement
remaining random sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502#discussion_r373753831
##
File path: benchmark/opperf/nd_operations/random_sampling_operators.py
##
@@ -19,
connorgoggins commented on a change in pull request #17502: [OpPerf] Implement
remaining random sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502#discussion_r373753647
##
File path: benchmark/opperf/nd_operations/random_sampling_operators.py
##
@@ -19,
ChaiBapchya commented on a change in pull request #17502: [OpPerf] Implement
remaining random sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502#discussion_r373753403
##
File path: benchmark/opperf/utils/op_registry_utils.py
##
@@ -223,13 +226,17 @@ def
ChaiBapchya commented on a change in pull request #17502: [OpPerf] Implement
remaining random sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502#discussion_r373753338
##
File path: benchmark/opperf/utils/op_registry_utils.py
##
@@ -23,8 +23,7 @@
from b
ChaiBapchya commented on a change in pull request #17502: [OpPerf] Implement
remaining random sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502#discussion_r373753313
##
File path: benchmark/opperf/nd_operations/random_sampling_operators.py
##
@@ -19,12
connorgoggins opened a new pull request #17502: [OpPerf] Implement remaining
random sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502
## Description ##
This PR serves to implement the remaining operators from the Random Sampling
category (`BilinearSampler`, `GridGen
connorgoggins commented on issue #17502: [OpPerf] Implement remaining random
sampling ops
URL: https://github.com/apache/incubator-mxnet/pull/17502#issuecomment-580983205
@mxnet-label-bot add [pr-awaiting-review]
leezu commented on a change in pull request #15969: Partitioning Gluon
HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373748228
##
File path: python/mxnet/gluon/block.py
##
@@ -954,6 +955,22 @@ def _build_cache(self, *args):
samskalicky commented on a change in pull request #15969: Partitioning Gluon
HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373748008
##
File path: python/mxnet/gluon/block.py
##
@@ -954,6 +955,22 @@ def _build_cache(self, *args):
samskalicky commented on a change in pull request #15969: Partitioning Gluon
HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373747516
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -18,351 +18,448 @@
import os
import cty
larroy commented on a change in pull request #15969: Partitioning Gluon
HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373747103
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -18,351 +18,448 @@
import os
import ctypes
larroy commented on a change in pull request #15969: Partitioning Gluon
HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373745860
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -18,351 +18,448 @@
import os
import ctypes
samskalicky commented on a change in pull request #15969: [WIP] Partitioning
Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373743013
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -18,351 +18,448 @@
import os
impo
samskalicky commented on a change in pull request #15969: [WIP] Partitioning
Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373743196
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -18,351 +18,448 @@
import os
impo
samskalicky commented on a change in pull request #15969: [WIP] Partitioning
Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373722105
##
File path: python/mxnet/gluon/block.py
##
@@ -954,6 +955,22 @@ def _build_cache(self, *args):
samskalicky commented on a change in pull request #15969: [WIP] Partitioning
Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373742941
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -18,351 +18,448 @@
import os
impo
samskalicky commented on a change in pull request #15969: [WIP] Partitioning
Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373743118
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -18,351 +18,448 @@
import os
impo
samskalicky commented on a change in pull request #15969: [WIP] Partitioning
Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373742675
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -18,351 +18,448 @@
import os
impo
samskalicky commented on a change in pull request #15969: [WIP] Partitioning
Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373721138
##
File path: python/mxnet/gluon/block.py
##
@@ -954,6 +955,22 @@ def _build_cache(self, *args):
samskalicky commented on a change in pull request #15969: [WIP] Partitioning
Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373742768
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -18,351 +18,448 @@
import os
impo
samskalicky commented on a change in pull request #15969: [WIP] Partitioning
Gluon HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373743077
##
File path: tests/python/unittest/test_subgraph_op.py
##
@@ -18,351 +18,448 @@
import os
impo
connorgoggins commented on issue #17501: [OpPerf] Implement remaining GEMM ops
URL: https://github.com/apache/incubator-mxnet/pull/17501#issuecomment-580970918
@mxnet-label-bot add [pr-awaiting-review]
connorgoggins commented on issue #17475: Implement remaining nn_activation ops
in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17475#issuecomment-580971022
@mxnet-label-bot add [pr-awaiting-review]
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 507fa8e Bump the publis
connorgoggins opened a new pull request #17501: [OpPerf] Implement remaining
gemm ops
URL: https://github.com/apache/incubator-mxnet/pull/17501
## Description ##
This PR serves to implement the remaining operators from the gemm category
in opperf. To achieve this, I added a call to `run
ChaiBapchya commented on issue #17444: [Large Tensor] Add LT support for NN
optimizers and 1 activation function
URL: https://github.com/apache/incubator-mxnet/pull/17444#issuecomment-580964144
```
>>> import mxnet as mx
>>> from mxnet.test_utils import *
>>> w = rand_ndarray((2**3
ChaiBapchya commented on a change in pull request #17482: [OpPerf] Add Neural
network loss ops
URL: https://github.com/apache/incubator-mxnet/pull/17482#discussion_r373736222
##
File path: benchmark/opperf/utils/profiler_utils.py
##
@@ -48,8 +48,8 @@ def _get_operator_prof
connorgoggins opened a new pull request #17500: [OpPerf] Implement remaining
nn_conv ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17500
## Description ##
This PR serves to implement the remaining operators from the nn_conv
category in opperf. To achieve this, I adde
ChaiBapchya commented on issue #14329: [Flaky] flaky test in
test_operator_gpu.test_convolution_multiple_streams
URL:
https://github.com/apache/incubator-mxnet/issues/14329#issuecomment-580953668
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fcentos-gp
guanxinq commented on a change in pull request #15969: [WIP] Partitioning Gluon
HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373715658
##
File path: python/mxnet/gluon/block.py
##
@@ -954,6 +955,18 @@ def _build_cache(self, *args):
guanxinq commented on a change in pull request #15969: [WIP] Partitioning Gluon
HybridBlocks
URL: https://github.com/apache/incubator-mxnet/pull/15969#discussion_r373715632
##
File path: python/mxnet/gluon/block.py
##
@@ -954,6 +955,18 @@ def _build_cache(self, *args):
larroy commented on issue #16654: Multithreaded Inference Support
URL: https://github.com/apache/incubator-mxnet/pull/16654#issuecomment-580935239
Did you run performance benchmarks to verify that there's no regression? As I
understand it, this is a change that could have an impact on performance
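A minimal way to sanity-check a change like this for regressions is a repeated timing run over the inference path. The sketch below uses only the standard library; `run_inference` is a hypothetical stand-in for the real forward pass:

```python
import timeit

def run_inference():
    # Stand-in workload; replace with the real model forward pass.
    return sum(i * i for i in range(1000))

# Repeat and take the minimum: the least noisy estimate of best-case latency.
times = timeit.repeat(run_inference, number=200, repeat=5)
per_call_s = min(times) / 200
```

Comparing `per_call_s` before and after the change gives a rough regression signal; a proper benchmark would also pin CPU frequency and warm up caches.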
apeforest commented on a change in pull request #17482: [OpPerf] Add Neural
network loss ops
URL: https://github.com/apache/incubator-mxnet/pull/17482#discussion_r373709329
##
File path: benchmark/opperf/utils/profiler_utils.py
##
@@ -48,8 +48,8 @@ def _get_operator_profil
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707739
##
File path: benchmark/opperf/nd_operations/sorting_searching_operators.py
##
@@ -39,6
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707761
##
File path: benchmark/opperf/nd_operations/unary_operators.py
##
@@ -45,6 +45,8 @@ def
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707696
##
File path: benchmark/opperf/nd_operations/reduction_operators.py
##
@@ -41,6 +41,8 @@
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707535
##
File path: benchmark/opperf/nd_operations/nn_conv_operators.py
##
@@ -60,131 +81,286
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707455
##
File path: benchmark/opperf/nd_operations/nn_conv_operators.py
##
@@ -60,131 +81,286
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707656
##
File path: benchmark/opperf/nd_operations/random_sampling_operators.py
##
@@ -44,6 +4
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707604
##
File path: benchmark/opperf/nd_operations/nn_optimizer_operators.py
##
@@ -46,6 +46,8
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707500
##
File path: benchmark/opperf/nd_operations/nn_conv_operators.py
##
@@ -60,131 +81,286
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707309
##
File path: benchmark/opperf/nd_operations/gemm_operators.py
##
@@ -55,33 +57,62 @@ de
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707375
##
File path: benchmark/opperf/nd_operations/nn_basic_operators.py
##
@@ -29,58 +29,132
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373707350
##
File path: benchmark/opperf/nd_operations/nn_activation_operators.py
##
@@ -55,55 +57
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373706497
##
File path: benchmark/opperf/nd_operations/nn_activation_operators.py
##
@@ -45,6 +45,
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373706254
##
File path: benchmark/opperf/nd_operations/gemm_operators.py
##
@@ -55,33 +57,62 @@ de
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373705848
##
File path: benchmark/opperf/nd_operations/binary_operators.py
##
@@ -48,6 +48,8 @@ de
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373706067
##
File path: benchmark/opperf/nd_operations/gemm_operators.py
##
@@ -44,6 +44,8 @@ def
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373705920
##
File path: benchmark/opperf/nd_operations/binary_operators.py
##
@@ -75,6 +77,8 @@ de
apeforest commented on a change in pull request #17449: Implemented large
tensor flag for opperf testing
URL: https://github.com/apache/incubator-mxnet/pull/17449#discussion_r373705688
##
File path: benchmark/opperf/nd_operations/array_rearrange.py
##
@@ -39,6 +39,8 @@ def
szhengac edited a comment on issue #17444: [Large Tensor] Add LT support for NN
optimizers and 1 activation function
URL: https://github.com/apache/incubator-mxnet/pull/17444#issuecomment-580919548
> So I tested MXNet (build from source using this branch)
> with flags :
>
> ```
szhengac commented on issue #17444: [Large Tensor] Add LT support for NN
optimizers and 1 activation function
URL: https://github.com/apache/incubator-mxnet/pull/17444#issuecomment-580919548
> So I tested MXNet (build from source using this branch)
> with flags :
>
> ```
> pyth
ChaiBapchya commented on issue #17444: [Large Tensor] Add LT support for NN
optimizers and 1 activation function
URL: https://github.com/apache/incubator-mxnet/pull/17444#issuecomment-580917968
@mxnet-label-bot add [pr-awaiting-review]
@apeforest
> @ChaiBapchya can you paste the
szhengac commented on a change in pull request #17444: [Large Tensor] Add LT
support for NN optimizers and 1 activation function
URL: https://github.com/apache/incubator-mxnet/pull/17444#discussion_r373685495
##
File path: src/operator/optimizer_op-inl.h
##
@@ -225,7 +225,
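For context, the "Large Tensor" (LT) work discussed in these reviews concerns indexing tensors whose element count exceeds the signed 32-bit range. A small standard-library sketch of the overflow that motivates switching to 64-bit indices:

```python
import ctypes

# Element count just past the int32 index range (2**31 - 1 max).
n = 2**31 + 5

# What a 32-bit signed index would hold for this value:
wrapped = ctypes.c_int32(n & 0xFFFFFFFF).value
print(wrapped)  # negative: the index has silently overflowed
```

Any operator that computes offsets with 32-bit integers would address the wrong memory for such tensors, which is why the optimizer kernels in this PR are being audited.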
haojin2 commented on issue #17384: Error in numpy unit test
URL:
https://github.com/apache/incubator-mxnet/issues/17384#issuecomment-580906058
Since it's an issue related to numpy itself, we can probably skip this test
for earlier numpy versions without having to change
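Version-gated skipping can be done with `unittest.skipIf`. A self-contained sketch — the version string and threshold here are hypothetical placeholders, not the actual bug's boundary:

```python
import unittest

def version_tuple(v):
    """Parse '1.17.4' -> (1, 17, 4); tolerates suffixes like '1.17.0rc1'."""
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if not ch.isdigit():
                break
            digits += ch
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

NUMPY_VERSION = "1.16.6"  # hypothetical; in practice read numpy.__version__

class TestNumpyDependent(unittest.TestCase):
    @unittest.skipIf(version_tuple(NUMPY_VERSION) < (1, 17),
                     "upstream numpy bug; fixed in newer versions")
    def test_affected_behavior(self):
        self.assertTrue(True)  # placeholder for the real assertion
```

The same pattern works with `pytest.mark.skipif` if the suite uses pytest.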
haojin2 opened a new pull request #17499: skip flaky
test_convolution_multiple_streams
URL: https://github.com/apache/incubator-mxnet/pull/17499
## Description ##
As title.
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ]
ChaiBapchya commented on a change in pull request #17444: [Large Tensor] Add LT
support for NN optimizers and 1 activation function
URL: https://github.com/apache/incubator-mxnet/pull/17444#discussion_r373674101
##
File path: src/operator/optimizer_op-inl.h
##
@@ -225,7 +2
ChaiBapchya commented on issue #17444: [Large Tensor] Add LT support for NN
optimizers and 1 activation function
URL: https://github.com/apache/incubator-mxnet/pull/17444#issuecomment-580900241
So I tested MXNet (build from source using this branch)
with flags :
```
python -c "fro
fomkin commented on issue #17483: Tests failed when I try to build
scala-package from source
URL:
https://github.com/apache/incubator-mxnet/issues/17483#issuecomment-580884164
I've got the same result when compiling libmxnet with `make`.
```
make USE_CUDA=1 USE_CUDA_PATH=/usr/local/cud
samskalicky commented on issue #17486: Update CustomOp doc with changes for GPU
support
URL: https://github.com/apache/incubator-mxnet/pull/17486#issuecomment-580872254
@rondogency please add the fix for the GPU example test program here, from
https://github.com/apache/incubator-mxnet/pull/
ChaiBapchya commented on issue #11395: Flaky test:
test_operator_gpu.test_sequence_last causes 'CUDA: unspecified launch failure'
URL:
https://github.com/apache/incubator-mxnet/issues/11395#issuecomment-580868191
I haven't found one. I kept retriggering and finally gave up.
aaronmarkham opened a new issue #17498: CI test timeout for Unix-CPU Python 3
debug
URL: https://github.com/apache/incubator-mxnet/issues/17498
## Description
The test is timing out at 4 hours.
Perhaps this test needs to be broken out into smaller parts? Or some parts
can be removed?
aaronmarkham commented on issue #11395: Flaky test:
test_operator_gpu.test_sequence_last causes 'CUDA: unspecified launch failure'
URL:
https://github.com/apache/incubator-mxnet/issues/11395#issuecomment-580854806
Unspecified launch failure here:
http://jenkins.mxnet-ci.amazon-ml.com/b
leezu opened a new issue #17497: Website: Separate master and stable versions
of website
URL: https://github.com/apache/incubator-mxnet/issues/17497
## Description
We may want a separate build pipeline and website version for each stable
branch + master.
## References
For exam
leezu commented on issue #7443: tf.boolean_mask equivalent in MxNet
URL:
https://github.com/apache/incubator-mxnet/issues/7443#issuecomment-580832724
As part of https://github.com/apache/incubator-mxnet/issues/14253, support
for this is in progress
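For readers unfamiliar with the requested operation: `tf.boolean_mask(tensor, mask)` keeps the entries of `tensor` where `mask` is true. A plain-Python illustration of the 1-D case (not MXNet's API, which was still being designed in #14253):

```python
def boolean_mask(data, mask):
    """Keep data[i] wherever mask[i] is truthy (1-D tf.boolean_mask)."""
    return [x for x, keep in zip(data, mask) if keep]

print(boolean_mask([10, 20, 30, 40], [True, False, True, False]))  # [10, 30]
```

In array frameworks the same idea is usually exposed as boolean indexing, e.g. `arr[mask]` in NumPy.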
leezu opened a new pull request #17496: Fix typo in ubuntu_setup.md
URL: https://github.com/apache/incubator-mxnet/pull/17496
ChaiBapchya edited a comment on issue #17331: [mxnet 2.0] [item 2.4] Turning on
large tensor support by default
URL:
https://github.com/apache/incubator-mxnet/issues/17331#issuecomment-580146186
[OpPerf] : Indexing Ops https://github.com/apache/incubator-mxnet/pull/16253
[OpPerf] : Neur
samskalicky commented on issue #16794: Random rotation
URL: https://github.com/apache/incubator-mxnet/pull/16794#issuecomment-580830593
restarted
EletronicElephant commented on issue #7443: tf.boolean_mask equivalent in MxNet
URL:
https://github.com/apache/incubator-mxnet/issues/7443#issuecomment-580765537
More than two years later, it seems like MXNet still doesn't support boolean
indexing?
chouxianyu edited a comment on issue #14875: MXNet to ONNX export bug
URL:
https://github.com/apache/incubator-mxnet/issues/14875#issuecomment-580600315
I ran into the same problem. I tried the solution in
PR [#14942](https://github.com/apache/incubator-mxnet/pull/14942) and found a
new bug
marcoabreu commented on a change in pull request #15990: Remove python2 from CI
URL: https://github.com/apache/incubator-mxnet/pull/15990#discussion_r373417381
##
File path: ci/docker/install/ubuntu_core.sh
##
@@ -21,6 +21,7 @@
# the whole docker cache for the image
set
lkubin commented on issue #16794: Random rotation
URL: https://github.com/apache/incubator-mxnet/pull/16794#issuecomment-58011
@zhreshold It seems that now only the Linux GPU tests are failing. Can you
restart just those tests to see if they will pass CI?
larroy commented on a change in pull request #13916: Static build for Python
URL: https://github.com/apache/incubator-mxnet/pull/13916#discussion_r373390745
##
File path: ci/publish/python/build.sh
##
@@ -0,0 +1,26 @@
+#!/usr/bin/env bash
+#
+# Licensed to the Apache Softwa