[GitHub] [incubator-mxnet] wkcn commented on issue #16747: Fused Op causes MXNetError

2019-11-07 Thread GitBox
wkcn commented on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-550971205 I agree to turn fused_op off until it is stable, and use an environment variable to switch it. The reason is that users couldn't use
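
For context, a minimal sketch of what switching fusion off through an environment variable looks like, assuming the switch is exposed as `MXNET_USE_FUSION` as in the GPU pointwise-fusion work; the exact variable name in the release may differ.
```python
import os

# Assumed switch name: MXNET_USE_FUSION. Set it before importing mxnet
# so the backend sees it when the library is loaded.
os.environ["MXNET_USE_FUSION"] = "0"

import mxnet as mx  # fused pointwise ops are now disabled for this process
```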

[GitHub] [incubator-mxnet] knjwhn opened a new issue #16749: Ask for advice about using my int8gemm

2019-11-07 Thread GitBox
knjwhn opened a new issue #16749: Ask for advice about using my int8gemm URL: https://github.com/apache/incubator-mxnet/issues/16749 Hello everyone. I implemented a GEMM multiply function for the u8s8s32 and s8s8s32 data types, and the function interface is the same as OpenBLAS. I want to us
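
For readers unfamiliar with the naming, here is a small NumPy sketch of the reference semantics of such an integer GEMM (uint8/int8 inputs with int32 accumulation). The function name is illustrative; the real routine would be a compiled kernel with offsets and scaling, this only shows the data types involved.
```python
import numpy as np

def gemm_u8s8s32(a_u8, b_s8, alpha=1, beta=0, c_s32=None):
    """Reference semantics of a u8s8s32 GEMM: uint8 x int8 inputs,
    accumulated and returned in int32 (offsets/scaling omitted)."""
    acc = a_u8.astype(np.int32) @ b_s8.astype(np.int32)
    if c_s32 is None:
        c_s32 = np.zeros_like(acc)
    return alpha * acc + beta * c_s32

a = np.random.randint(0, 256, size=(4, 8), dtype=np.uint8)
b = np.random.randint(-128, 128, size=(8, 3), dtype=np.int8)
print(gemm_u8s8s32(a, b).dtype)  # int32
```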

[GitHub] [incubator-mxnet] TaoLv commented on issue #16747: Fused Op causes MXNetError

2019-11-07 Thread GitBox
TaoLv commented on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-550978412 +1 This is an automated message from the Apache Git Service. To respond to the mes

[GitHub] [incubator-mxnet] knjwhn commented on issue #16749: Ask for advice about using my int8gemm

2019-11-07 Thread GitBox
knjwhn commented on issue #16749: Ask for advice about using my int8gemm URL: https://github.com/apache/incubator-mxnet/issues/16749#issuecomment-550979571 https://github.com/apache/incubator-mxnet/blob/c38b52784325380f79cafb9f0407ad327554fe6b/src/operator/quantization/quantized_fully_con

[GitHub] [incubator-mxnet] TaoLv commented on issue #16749: Ask for advice about using my int8gemm

2019-11-07 Thread GitBox
TaoLv commented on issue #16749: Ask for advice about using my int8gemm URL: https://github.com/apache/incubator-mxnet/issues/16749#issuecomment-550984158 Are you running experiments or targeting for upstreaming in the future? If just experiments, I think you can directly replace the `cbl

[GitHub] [incubator-mxnet] knjwhn commented on issue #16749: Ask for advice about using my int8gemm

2019-11-07 Thread GitBox
knjwhn commented on issue #16749: Ask for advice about using my int8gemm URL: https://github.com/apache/incubator-mxnet/issues/16749#issuecomment-550989453 Thanks! I build MXNet with USE_BLAS=openblas. Because my target machine is arm64 I can't use MKL or MKL-DNN, and by avoidi

[GitHub] [incubator-mxnet] wkcn edited a comment on issue #16747: Fused Op causes MXNetError

2019-11-07 Thread GitBox
wkcn edited a comment on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-550971205 I agree to turn fused_op off by default until it is stable. The reason is that users couldn't use the 1.6.0 release if it is not

[GitHub] [incubator-mxnet] TaoLv commented on issue #16749: Ask for advice about using my int8gemm

2019-11-07 Thread GitBox
TaoLv commented on issue #16749: Ask for advice about using my int8gemm URL: https://github.com/apache/incubator-mxnet/issues/16749#issuecomment-550998461 I see. Given you already have INT8 GEMM, so I would suggest you start from the normal FP32 FullyConnected operator and replace its imp

[GitHub] [incubator-mxnet] marcoabreu commented on issue #16412: Cleanup output of docker cache generation

2019-11-07 Thread GitBox
marcoabreu commented on issue #16412: Cleanup output of docker cache generation URL: https://github.com/apache/incubator-mxnet/pull/16412#issuecomment-551013603 Thanks, I'd prefer not to suppress the output and not to move forward with the change --

[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #16184: Add large tensor nightly tests for MKL-DNN operators

2019-11-07 Thread GitBox
wuxun-zhang commented on issue #16184: Add large tensor nightly tests for MKL-DNN operators URL: https://github.com/apache/incubator-mxnet/pull/16184#issuecomment-551030416 Output log of all mkl-dnn operators (with `export MKLDNN_VERBOSE=1` ): ``` test_large_array_mkldnn.test_Full

[GitHub] [incubator-mxnet] knjwhn commented on issue #16749: Ask for advice about using my int8gemm

2019-11-07 Thread GitBox
knjwhn commented on issue #16749: Ask for advice about using my int8gemm URL: https://github.com/apache/incubator-mxnet/issues/16749#issuecomment-551031875 Thanks a lot! I will try that. And are there any study resources on how to use the quantization/dequantization code? What's more, I st
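
As a starting point for the quantization question, here is a rough sketch of MXNet's post-training quantization flow in Python. The checkpoint prefix is a placeholder and keyword arguments may differ slightly across versions, so treat this as an outline rather than a verbatim recipe.
```python
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# "model" is a placeholder prefix for an existing FP32 checkpoint.
sym, arg_params, aux_params = mx.model.load_checkpoint("model", 0)

# calib_mode="none" skips calibration; "naive"/"entropy" would need calib_data.
qsym, qarg_params, aux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    ctx=mx.cpu(), calib_mode="none")

mx.model.save_checkpoint("model-quantized", 0, qsym, qarg_params, aux_params)
```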

[GitHub] [incubator-mxnet] Alicia1529 opened a new pull request #16750: [Numpy] add custom op sort

2019-11-07 Thread GitBox
Alicia1529 opened a new pull request #16750: [Numpy] add custom op sort URL: https://github.com/apache/incubator-mxnet/pull/16750 ## Description ## add custom NumPy operator sort This is an automated message from the Ap

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16731: [WIP] Static link MKL-DNN library

2019-11-07 Thread GitBox
pengzhao-intel commented on issue #16731: [WIP] Static link MKL-DNN library URL: https://github.com/apache/incubator-mxnet/pull/16731#issuecomment-551052887 @Taolv could you give a simple summary of the reason for this change and any possible impacts? --

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-07 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 1f2ed68 Bump the publis

[GitHub] [incubator-mxnet] linven0721 opened a new issue #16751: Run Java example, GPU, Memory leak

2019-11-07 Thread GitBox
linven0721 opened a new issue #16751: Run Java example, GPU, Memory leak URL: https://github.com/apache/incubator-mxnet/issues/16751 ## Description When I tried to run the Java predictor example([https://github.com/apache/incubator-mxnet/blob/master/scala-package/examples/src/main/java/o

[GitHub] [incubator-mxnet] TaoLv commented on issue #16731: [WIP] Static link MKL-DNN library

2019-11-07 Thread GitBox
TaoLv commented on issue #16731: [WIP] Static link MKL-DNN library URL: https://github.com/apache/incubator-mxnet/pull/16731#issuecomment-551106435 @pengzhao-intel , sure, PR description is added. This is an automated message

[GitHub] [incubator-mxnet] TaoLv commented on issue #16749: Ask for advice about using my int8gemm

2019-11-07 Thread GitBox
TaoLv commented on issue #16749: Ask for advice about using my int8gemm URL: https://github.com/apache/incubator-mxnet/issues/16749#issuecomment-551112964 MXNet INT8 Conv doesn't call GEMM directly. It leverages the Conv APIs from MKL-DNN/cuDNN. --

[GitHub] [incubator-mxnet] TaoLv commented on issue #16731: [WIP] Static link MKL-DNN library

2019-11-07 Thread GitBox
TaoLv commented on issue #16731: [WIP] Static link MKL-DNN library URL: https://github.com/apache/incubator-mxnet/pull/16731#issuecomment-551126579 @szha @lanking520 @perdasilva @marcoabreu Could you please review, especially the parts about CD and packaging? --

[GitHub] [incubator-mxnet] JONGGON opened a new issue #16752: Gluon hybridize is not perfect.

2019-11-07 Thread GitBox
JONGGON opened a new issue #16752: Gluon hybridize is not perfect. URL: https://github.com/apache/incubator-mxnet/issues/16752 ## Description (A clear and concise description of what the bug is.) I am implementing yolov3. If hybridize is not active, no problem will occur. Whe
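
For readers new to this class of issue, a minimal sketch (unrelated to the reporter's yolov3 code) of where hybridization changes behaviour: after `hybridize()`, `hybrid_forward` receives symbols instead of NDArrays, so code that depends on concrete values or shapes can work imperatively but fail once hybridized.
```python
import mxnet as mx
from mxnet.gluon import nn, HybridBlock

class Toy(HybridBlock):
    def __init__(self, **kwargs):
        super(Toy, self).__init__(**kwargs)
        self.dense = nn.Dense(4)

    def hybrid_forward(self, F, x):
        # F is mx.nd imperatively and mx.sym after hybridize();
        # something like x.asnumpy() here would only work before hybridize().
        return F.relu(self.dense(x))

net = Toy()
net.initialize()
out_imperative = net(mx.nd.ones((2, 8)))  # eager execution
net.hybridize()
out_hybrid = net(mx.nd.ones((2, 8)))      # cached-graph execution
```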

[GitHub] [incubator-mxnet] reminisce commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
reminisce commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551173088 I'm personally not fond of using the operator `reset_array` for some specific purpose. First of all, it do

[GitHub] [incubator-mxnet] reminisce edited a comment on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
reminisce edited a comment on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551173088 I'm personally not fond of using the operator `reset_array` for some specific purpose. First of all

[GitHub] [incubator-mxnet] alopez1327 opened a new issue #16753: fail to build using docker

2019-11-07 Thread GitBox
alopez1327 opened a new issue #16753: fail to build using docker URL: https://github.com/apache/incubator-mxnet/issues/16753 ## Description Tried building the library using Docker with the command _python3 ci/build.py -p armv7_ but compilation failed because it seems Docker is pulling s

[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16750: [Numpy] add custom op sort

2019-11-07 Thread GitBox
reminisce commented on a change in pull request #16750: [Numpy] add custom op sort URL: https://github.com/apache/incubator-mxnet/pull/16750#discussion_r343782233 ## File path: python/mxnet/ndarray/numpy/_op.py ## @@ -344,6 +344,133 @@ def take(a, indices, axis=None, mode=

[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16750: [Numpy] add custom op sort

2019-11-07 Thread GitBox
reminisce commented on a change in pull request #16750: [Numpy] add custom op sort URL: https://github.com/apache/incubator-mxnet/pull/16750#discussion_r343784245 ## File path: python/mxnet/numpy_op_fallback.py ## @@ -49,6 +49,48 @@ def _register_helper(prop_cls): ret

[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16748: Fix SliceChannel Type inference

2019-11-07 Thread GitBox
anirudh2290 commented on a change in pull request #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748#discussion_r343786519 ## File path: src/operator/slice_channel-inl.h ## @@ -176,16 +177,22 @@ class SliceChannelProp : public

[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16748: Fix SliceChannel Type inference

2019-11-07 Thread GitBox
anirudh2290 commented on a change in pull request #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748#discussion_r343786780 ## File path: tests/python/gpu/test_contrib_amp.py ## @@ -475,6 +475,15 @@ def test_fp16_casting():

[GitHub] [incubator-mxnet] Wallart opened a new issue #16754: Is mirroring working with MXNet 1.5.1 Gluon ?

2019-11-07 Thread GitBox
Wallart opened a new issue #16754: Is mirroring working with MXNet 1.5.1 Gluon ? URL: https://github.com/apache/incubator-mxnet/issues/16754 Hello everyone, I am currently implementing a Sparse/LogSparse transformer. I would like some layers results to be re-computed at backward pass t

[GitHub] [incubator-mxnet] sxjscience commented on issue #16576: [Numpy][Bug] einsum bug

2019-11-07 Thread GitBox
sxjscience commented on issue #16576: [Numpy][Bug] einsum bug URL: https://github.com/apache/incubator-mxnet/issues/16576#issuecomment-551203792 I've rerun the script and can confirm that @hzfan's commit has solved the problem. I'll later try to use einsum in the implementation of neural a

[GitHub] [incubator-mxnet] sxjscience closed issue #16576: [Numpy][Bug] einsum bug

2019-11-07 Thread GitBox
sxjscience closed issue #16576: [Numpy][Bug] einsum bug URL: https://github.com/apache/incubator-mxnet/issues/16576 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub

[GitHub] [incubator-mxnet] sxjscience commented on issue #16752: Gluon hybridize is not perfect.

2019-11-07 Thread GitBox
sxjscience commented on issue #16752: Gluon hybridize is not perfect. URL: https://github.com/apache/incubator-mxnet/issues/16752#issuecomment-551205420 @JONGGON There are a few known inconsistent behaviors of HybridBlock: https://github.com/apache/incubator-mxnet/issues/16279, https://gi

[GitHub] [incubator-mxnet] sxjscience edited a comment on issue #16752: Gluon hybridize is not perfect.

2019-11-07 Thread GitBox
sxjscience edited a comment on issue #16752: Gluon hybridize is not perfect. URL: https://github.com/apache/incubator-mxnet/issues/16752#issuecomment-551205420 @JONGGON There are a few known inconsistent behaviors of HybridBlock: https://github.com/apache/incubator-mxnet/issues/16279, htt

[GitHub] [incubator-mxnet] sxjscience commented on issue #12268: Inconsistent type conversion from numpy.ndarray to mx.ndarray

2019-11-07 Thread GitBox
sxjscience commented on issue #12268: Inconsistent type conversion from numpy.ndarray to mx.ndarray URL: https://github.com/apache/incubator-mxnet/issues/12268#issuecomment-551207085 This issue is solved by the new Numpy NDArray in MXNet: ```python import mxnet as mx import numpy
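
A quick way to check the behaviour being referenced is to compare the legacy ndarray front end with the new numpy one; the printed dtypes depend on the MXNet version, so this sketch only shows how to inspect them.
```python
import mxnet as mx
import numpy as np

src = np.arange(5, dtype=np.int64)
print(mx.nd.array(src).dtype)   # legacy front end
print(mx.np.array(src).dtype)   # numpy front end discussed in this issue
```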

[GitHub] [incubator-mxnet] sxjscience closed issue #3660: [OP] Convention for the `axis` argument

2019-11-07 Thread GitBox
sxjscience closed issue #3660: [OP] Convention for the `axis` argument URL: https://github.com/apache/incubator-mxnet/issues/3660 This is an automated message from the Apache Git Service. To respond to the message, please log

[GitHub] [incubator-mxnet] sxjscience closed issue #12268: Inconsistent type conversion from numpy.ndarray to mx.ndarray

2019-11-07 Thread GitBox
sxjscience closed issue #12268: Inconsistent type conversion from numpy.ndarray to mx.ndarray URL: https://github.com/apache/incubator-mxnet/issues/12268 This is an automated message from the Apache Git Service. To respond t

[GitHub] [incubator-mxnet] sxjscience commented on issue #3822: [OP] index_fill and index_add operators

2019-11-07 Thread GitBox
sxjscience commented on issue #3822: [OP] index_fill and index_add operators URL: https://github.com/apache/incubator-mxnet/issues/3822#issuecomment-551208268 The new numpy ndarray supports this feature. This is an automated

[GitHub] [incubator-mxnet] sxjscience closed issue #3822: [OP] index_fill and index_add operators

2019-11-07 Thread GitBox
sxjscience closed issue #3822: [OP] index_fill and index_add operators URL: https://github.com/apache/incubator-mxnet/issues/3822 This is an automated message from the Apache Git Service. To respond to the message, please log

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-07 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 02d2b16 Bump the publis

[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16750: [Numpy] add custom op sort

2019-11-07 Thread GitBox
reminisce commented on a change in pull request #16750: [Numpy] add custom op sort URL: https://github.com/apache/incubator-mxnet/pull/16750#discussion_r343814159 ## File path: python/mxnet/numpy/multiarray.py ## @@ -2276,6 +2276,127 @@ def take(a, indices, axis=None, mode
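
A hypothetical usage sketch of the operator this PR adds, assuming the front end mirrors `numpy.sort` as the reviewed files (`python/mxnet/ndarray/numpy/_op.py`, `python/mxnet/numpy/multiarray.py`) suggest.
```python
import mxnet as mx
from mxnet import np, npx
npx.set_np()

a = np.array([[3, 1, 2], [6, 5, 4]])
print(np.sort(a, axis=1))     # row-wise sort
print(np.sort(a, axis=None))  # flattened sort, per NumPy semantics
```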

[GitHub] [incubator-mxnet] alopez1327 commented on issue #16753: fail to build using docker

2019-11-07 Thread GitBox
alopez1327 commented on issue #16753: fail to build using docker URL: https://github.com/apache/incubator-mxnet/issues/16753#issuecomment-551216461 Tried to fix the error by adding: _apt update && apt-key update && apt install -y --no-install-recommends_ to deb.ubuntu.ccache.sh be

[GitHub] [incubator-mxnet] reminisce commented on a change in pull request #16660: [WIP] [Numpy] TVM implementation for binary ops

2019-11-07 Thread GitBox
reminisce commented on a change in pull request #16660: [WIP] [Numpy] TVM implementation for binary ops URL: https://github.com/apache/incubator-mxnet/pull/16660#discussion_r343827251 ## File path: contrib/tvmop/core/umath.py ## @@ -120,3 +121,327 @@ def _compute_binary_sc

[GitHub] [incubator-mxnet] ptrendx commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
ptrendx commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551230288 @reminisce No. That is not how it works and please do not undo performance optimizations because of such fal

[GitHub] [incubator-mxnet] sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551231994 @ptrendx I think you have misunderstood my comment. What I mean is that we could try to fuse these zeroi

[GitHub] [incubator-mxnet] ptrendx commented on issue #16747: Fused Op causes MXNetError

2019-11-07 Thread GitBox
ptrendx commented on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-551232209 Isn't right now the period of finding those integration bugs and fixing them for the 1.6 release? I will definitely look into this issue and fix it,

[GitHub] [incubator-mxnet] ptrendx commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
ptrendx commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551233612 @sxjscience How would you like to fuse them without writing a new operator?

[GitHub] [incubator-mxnet] sxjscience commented on issue #16747: Fused Op causes MXNetError

2019-11-07 Thread GitBox
sxjscience commented on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-551235898 @ptrendx I think we are already in a code-freeze status and the simplest fix is to turn it off by default. We could easily turn it on in 1.6.1

[GitHub] [incubator-mxnet] sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551237107 @ptrendx For example, we can later try to hybridize part of the imperative codes by the nvrtc approach th

[GitHub] [incubator-mxnet] sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551237737 @ptrendx The problem of the `reset_arrays` approach is that the users may later require `fill_arrays`, wh

[GitHub] [incubator-mxnet] access2rohit commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators

2019-11-07 Thread GitBox
access2rohit commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-551240748 @wuxun-zhang thank you for your contribution and providing a quick fix. -

[GitHub] [incubator-mxnet] reminisce commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
reminisce commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551241818 @ptrendx The performance overhead in your benchmark really comes from the FFI and pushing ops to the a

[GitHub] [incubator-mxnet] sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551243605 > scare me, frankly. Doing numpy-like fully imperative execution has no chance of being actually fast and

[GitHub] [incubator-mxnet] reminisce commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
reminisce commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551246891 Another big factor that may contribute to the slowdown of assigning zeros is through `a[:] = 0` which has
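
For concreteness, a small sketch of the two approaches under discussion, assuming the `reset_arrays` operator from this thread is present in the build; the point of contention is one engine push per gradient (plus the `a[:] = 0` indexing overhead) versus a single bulk call.
```python
import mxnet as mx

grads = [mx.nd.ones((1024, 1024)) for _ in range(50)]

# Per-array assignment: one op per gradient plus Python indexing overhead.
for g in grads:
    g[:] = 0

# Bulk variant discussed in this PR: a single operator call over all arrays.
mx.nd.reset_arrays(*grads, num_arrays=len(grads))
```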

[GitHub] [incubator-mxnet] ChaiBapchya opened a new pull request #16755: Enabling large tensor support for binary broadcast operators

2019-11-07 Thread GitBox
ChaiBapchya opened a new pull request #16755: Enabling large tensor support for binary broadcast operators URL: https://github.com/apache/incubator-mxnet/pull/16755 ## Description ## In continuation of #16714 Add tests - arctan2 - hypot ## Checklist ## ### Essentials
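
A rough shape-level sketch of the kind of check this PR adds; the real tests live in tests/nightly/test_large_array.py and tests/nightly/test_large_vector.py, and the constants below are stand-ins for the shared LARGE_X/SMALL_Y values used there.
```python
import mxnet as mx

LARGE_X, SMALL_Y = 100_000_000, 50  # stand-in sizes; the tensor exceeds 2**32 elements

def check_large_binary_broadcast():
    a = mx.nd.ones((LARGE_X, SMALL_Y))
    b = mx.nd.ones((1, SMALL_Y))
    out = mx.nd.broadcast_hypot(a, b)  # second operand broadcast over rows
    assert out.shape == (LARGE_X, SMALL_Y)
```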

[GitHub] [incubator-mxnet] ptrendx commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
ptrendx commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551264243 Ok, let me address those comments 1 point at a time :-). - usage of TVM/nvrtc - I am generally in favor o

[GitHub] [incubator-mxnet] access2rohit commented on issue #16732: MKLDNN-1.0 doesn't support slice operator for Large Tensor

2019-11-07 Thread GitBox
access2rohit commented on issue #16732: MKLDNN-1.0 doesn't support slice operator for Large Tensor URL: https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-551276027 PR for the fix: https://github.com/apache/incubator-mxnet/pull/16737 --

[GitHub] [incubator-mxnet] access2rohit edited a comment on issue #16732: MKLDNN-1.0 doesn't support slice operator for Large Tensor

2019-11-07 Thread GitBox
access2rohit edited a comment on issue #16732: MKLDNN-1.0 doesn't support slice operator for Large Tensor URL: https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-551276027 PR: https://github.com/apache/incubator-mxnet/pull/16737 fixes the issue

[GitHub] [incubator-mxnet] access2rohit commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators

2019-11-07 Thread GitBox
access2rohit commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-551276539 @ChaiBapchya can you also verify ? This i

[GitHub] [incubator-mxnet] CanyonWind commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed

2019-11-07 Thread GitBox
CanyonWind commented on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed URL: https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-551277963 Hi @ZhennanQin, thanks a lot to your effort! I tried to verify the quantized model's

[GitHub] [incubator-mxnet] ptrendx commented on issue #16747: Fused Op causes MXNetError

2019-11-07 Thread GitBox
ptrendx commented on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-551284480 Ok, I sent a clarification email to dev@ as you are not actually the first person to reach out to me with this misunderstanding of code freeze. C

[GitHub] [incubator-mxnet] zachgk commented on issue #16560: It is easy to crash MXNet when tensor goes larger

2019-11-07 Thread GitBox
zachgk commented on issue #16560: It is easy to crash MXNet when tensor goes larger URL: https://github.com/apache/incubator-mxnet/issues/16560#issuecomment-551285938 Is this resolved now that #16570 is merged? This is an au

[GitHub] [incubator-mxnet] sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface

2019-11-07 Thread GitBox
sxjscience commented on issue #16716: [Numpy] Fix collect_params().zero_grad() in gluon numpy interface URL: https://github.com/apache/incubator-mxnet/pull/16716#issuecomment-551291281 @ptrendx Let me clarify a little bit: 1) In the nd interface, using `reset_arrays` as a short-t

[GitHub] [incubator-mxnet] CanyonWind edited a comment on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed

2019-11-07 Thread GitBox
CanyonWind edited a comment on issue #16424: [Channel Shuffle / Hard Swish / Hard Sigmoid] running in MKL CPU backend failed URL: https://github.com/apache/incubator-mxnet/issues/16424#issuecomment-551277963 Hi @ZhennanQin, thanks a lot for your effort! I tried to verify the quantized

[GitHub] [incubator-mxnet] leezu commented on issue #16747: Fused Op causes MXNetError

2019-11-07 Thread GitBox
leezu commented on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-551312428 I agree with @ptrendx, we should try to fix the bugs and ship the features if time allows. ---

[GitHub] [incubator-mxnet] sxjscience commented on issue #16747: Fused Op causes MXNetError

2019-11-07 Thread GitBox
sxjscience commented on issue #16747: Fused Op causes MXNetError URL: https://github.com/apache/incubator-mxnet/issues/16747#issuecomment-551316009 I received the clarification email about the meaning of code freeze and I agree with @ptrendx that we should try to fix it these days and cons

[GitHub] [incubator-mxnet] anirudh2290 opened a new pull request #16756: [WIP] Multithreaded inference backend support

2019-11-07 Thread GitBox
anirudh2290 opened a new pull request #16756: [WIP] Multithreaded inference backend support URL: https://github.com/apache/incubator-mxnet/pull/16756 ## Description ## Trying to run CI against 1.6 ## Checklist ## ### Essentials ### Please feel free to remove inapplicable ite

[GitHub] [incubator-mxnet] access2rohit commented on issue #16755: Enabling large tensor support for binary broadcast operators

2019-11-07 Thread GitBox
access2rohit commented on issue #16755: Enabling large tensor support for binary broadcast operators URL: https://github.com/apache/incubator-mxnet/pull/16755#issuecomment-551323548 LGTM! Also paste the output for the tests run. Can we also do large vector tests for arctan2? I assume h

[GitHub] [incubator-mxnet] ptrendx commented on issue #16748: Fix SliceChannel Type inference

2019-11-07 Thread GitBox
ptrendx commented on issue #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748#issuecomment-551323521 I don't know enough to comment on the changes to the example etc., but the main fix from this PR (InferType in SliceChannel) looks good. @aniru

[GitHub] [incubator-mxnet] anirudh2290 edited a comment on issue #16748: Fix SliceChannel Type inference

2019-11-07 Thread GitBox
anirudh2290 edited a comment on issue #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748#issuecomment-551323980 @ptrendx how did you come up with that sentence ? This is an automa

[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16748: Fix SliceChannel Type inference

2019-11-07 Thread GitBox
anirudh2290 commented on issue #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748#issuecomment-551323980 @ptrendx how did you come up with that regex ? This is an automated messag

[GitHub] [incubator-mxnet] anirudh2290 edited a comment on issue #16748: Fix SliceChannel Type inference

2019-11-07 Thread GitBox
anirudh2290 edited a comment on issue #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748#issuecomment-551323980 @ptrendx how did you come up with that sentence ? EDIT: Never mind, its the same error message. -

[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16748: Fix SliceChannel Type inference

2019-11-07 Thread GitBox
anirudh2290 commented on issue #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748#issuecomment-551326529 @ptrendx good point about the other places where error is raised. The scope of the PR increased :) . Can we start with this PR first and I

[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16755: Enabling large tensor support for binary broadcast operators

2019-11-07 Thread GitBox
ChaiBapchya commented on issue #16755: Enabling large tensor support for binary broadcast operators URL: https://github.com/apache/incubator-mxnet/pull/16755#issuecomment-551328930 Build flag ``` python -c "from mxnet.runtime import feature_list; print(feature_list())" [✖ CUDA, ✖ C

[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-11-07 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 961e79d Bump the publis

[GitHub] [incubator-mxnet] DickJC123 commented on issue #15167: Pointwise fusion for GPU

2019-11-07 Thread GitBox
DickJC123 commented on issue #15167: Pointwise fusion for GPU URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-551332574 After some investigation, I have an explanation and planned fix for the perf regression. To repeat what @ptrendx mentions, the real-time compilation

[GitHub] [incubator-mxnet] DickJC123 edited a comment on issue #15167: Pointwise fusion for GPU

2019-11-07 Thread GitBox
DickJC123 edited a comment on issue #15167: Pointwise fusion for GPU URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-551332574 After some investigation, I have an explanation and planned fix for the perf regression. To repeat what @ptrendx mentions, the real-time comp

[GitHub] [incubator-mxnet] larroy commented on issue #16412: Cleanup output of docker cache generation

2019-11-07 Thread GitBox
larroy commented on issue #16412: Cleanup output of docker cache generation URL: https://github.com/apache/incubator-mxnet/pull/16412#issuecomment-551334777 Thanks for your review. So going forward will you maintain this infrastructure then?

[GitHub] [incubator-mxnet] anirudh2290 opened a new issue #16757: Exceptions thrown in InferType before doing reverse inference

2019-11-07 Thread GitBox
anirudh2290 opened a new issue #16757: Exceptions thrown in InferType before doing reverse inference URL: https://github.com/apache/incubator-mxnet/issues/16757 ## Description Some operators are missing logic for reverse inference: inferring the type of inputs from outputs. Instead, exc

[GitHub] [incubator-mxnet] wuxun-zhang commented on issue #16732: MKLDNN-1.0 doesn't support slice operator for Large Tensor

2019-11-07 Thread GitBox
wuxun-zhang commented on issue #16732: MKLDNN-1.0 doesn't support slice operator for Large Tensor URL: https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-551337898 Glad it works. This is an automated messag

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16731: [WIP] Static link MKL-DNN library

2019-11-07 Thread GitBox
pengzhao-intel commented on issue #16731: [WIP] Static link MKL-DNN library URL: https://github.com/apache/incubator-mxnet/pull/16731#issuecomment-551341109 > @pengzhao-intel , sure, PR description is added. Very great explanation. Ping me when your PR is ready to mer

[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators

2019-11-07 Thread GitBox
pengzhao-intel merged pull request #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators URL: https://github.com/apache/incubator-mxnet/pull/16737 This is an automated message from the Apache Git Service. To

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators

2019-11-07 Thread GitBox
pengzhao-intel commented on issue #16737: [MKLDNN] use dim_t instead of int in slice/transpose operators URL: https://github.com/apache/incubator-mxnet/pull/16737#issuecomment-551343510 I am merging this PR first and feel free to ping us if other issues are found. ---

[incubator-mxnet] branch master updated (c38b527 -> 5dfa121)

2019-11-07 Thread patriczhao
This is an automated email from the ASF dual-hosted git repository. patriczhao pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from c38b527 [MKLDNN] Fix int8 convolution/fc bias overflow (#16734) add 5dfa121 [MKLDNN] use dim_t in

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16749: Ask for advice about using my int8gemm

2019-11-07 Thread GitBox
pengzhao-intel commented on issue #16749: Ask for advice about using my int8gemm URL: https://github.com/apache/incubator-mxnet/issues/16749#issuecomment-551345467 @knjwhn Does the current int8 FC OP work for you? Could you provide some background on your workload and target with int

[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16749: Ask for advice about using my int8gemm

2019-11-07 Thread GitBox
pengzhao-intel commented on issue #16749: Ask for advice about using my int8gemm URL: https://github.com/apache/incubator-mxnet/issues/16749#issuecomment-551346152 More info in our blog: https://medium.com/apache-mxnet/model-quantization-for-production-level-neural-network-inference-f54

[GitHub] [incubator-mxnet] hzfan commented on a change in pull request #16660: [WIP] [Numpy] TVM implementation for binary ops

2019-11-07 Thread GitBox
hzfan commented on a change in pull request #16660: [WIP] [Numpy] TVM implementation for binary ops URL: https://github.com/apache/incubator-mxnet/pull/16660#discussion_r343961440 ## File path: contrib/tvmop/core/umath.py ## @@ -120,3 +121,327 @@ def _compute_binary_scalar

[GitHub] [incubator-mxnet] ptrendx commented on issue #16748: Fix SliceChannel Type inference

2019-11-07 Thread GitBox
ptrendx commented on issue #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748#issuecomment-551352067 Sure, having this PR fix just SliceChannel is totally fine. This is an automat

[GitHub] [incubator-mxnet] leezu closed issue #16753: fail to build using docker

2019-11-07 Thread GitBox
leezu closed issue #16753: fail to build using docker URL: https://github.com/apache/incubator-mxnet/issues/16753 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub an

[GitHub] [incubator-mxnet] leezu commented on issue #16753: fail to build using docker

2019-11-07 Thread GitBox
leezu commented on issue #16753: fail to build using docker URL: https://github.com/apache/incubator-mxnet/issues/16753#issuecomment-551354006 I believe `ci/build.py` is not supported outside of the CI environment, so I'm closing the issue. Do you have any issues with cross-compiling? -

[GitHub] [incubator-mxnet] leezu edited a comment on issue #16753: fail to build using docker

2019-11-07 Thread GitBox
leezu edited a comment on issue #16753: fail to build using docker URL: https://github.com/apache/incubator-mxnet/issues/16753#issuecomment-551354006 I believe `ci/build.py` is not supported outside of the CI environment, so I'm closing the issue. Feel free to reopen if you believe it is

[GitHub] [incubator-mxnet] vexilligera commented on issue #16730: [NumPy] NumPy support for linalg.inv

2019-11-07 Thread GitBox
vexilligera commented on issue #16730: [NumPy] NumPy support for linalg.inv URL: https://github.com/apache/incubator-mxnet/pull/16730#issuecomment-551354516 @reminisce @haojin2 This is an automated message from the Apache Git

[GitHub] [incubator-mxnet] leezu opened a new issue #16758: [Estimator] Emits warnings when mixing user-defined and default event handlers

2019-11-07 Thread GitBox
leezu opened a new issue #16758: [Estimator] Emits warnings when mixing user-defined and default event handlers URL: https://github.com/apache/incubator-mxnet/issues/16758 ## Description Estimator emits an arguably useless warning when users specify `event_handlers`. ### Error Me

[GitHub] [incubator-mxnet] anirudh2290 merged pull request #16748: Fix SliceChannel Type inference

2019-11-07 Thread GitBox
anirudh2290 merged pull request #16748: Fix SliceChannel Type inference URL: https://github.com/apache/incubator-mxnet/pull/16748 This is an automated message from the Apache Git Service. To respond to the message, please log

[incubator-mxnet] branch master updated (5dfa121 -> a37dcd4)

2019-11-07 Thread anirudh2290
This is an automated email from the ASF dual-hosted git repository. anirudh2290 pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 5dfa121 [MKLDNN] use dim_t instead of int in slice/transpose operators (#16737) add a37dcd4 Fix

[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16755: Enabling large tensor support for binary broadcast operators

2019-11-07 Thread GitBox
ChaiBapchya commented on issue #16755: Enabling large tensor support for binary broadcast operators URL: https://github.com/apache/incubator-mxnet/pull/16755#issuecomment-551359931 > LGTM! Also paste the output for the tests run. > Can we also do large vector tests for arctan2? I a

[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16755: Enabling large tensor support for binary broadcast operators

2019-11-07 Thread GitBox
ChaiBapchya commented on issue #16755: Enabling large tensor support for binary broadcast operators URL: https://github.com/apache/incubator-mxnet/pull/16755#issuecomment-551360769 Vector test ``` nosetests tests/nightly/test_large_vector.py:test_binary_broadcast . -

[GitHub] [incubator-mxnet] TaoLv commented on issue #11417: libomp.so dependency (need REAL fix)

2019-11-07 Thread GitBox
TaoLv commented on issue #11417: libomp.so dependency (need REAL fix) URL: https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-551361750 > btw there's a problem with the mkldnn build. It pulls in libgomp always: @cjolivier01 It seems that even MXNet is not built with

[GitHub] [incubator-mxnet] zhly0 opened a new issue #16759: LeakyReLU

2019-11-07 Thread GitBox
zhly0 opened a new issue #16759: LeakyReLU URL: https://github.com/apache/incubator-mxnet/issues/16759 ## Description In the doc http://mxnet.incubator.apache.org/api/python/docs/api/symbol/op/index.html#mxnet.symbol.op.LeakyReLU it is said that LeakyReLU is applied to the input elem
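
Regarding the question, a quick element-wise check of the operator: negative entries are scaled by `slope`, non-negative entries pass through unchanged.
```python
import mxnet as mx

x = mx.nd.array([-2.0, -0.5, 0.0, 1.5])
y = mx.nd.LeakyReLU(x, act_type="leaky", slope=0.25)
print(y)  # expected: [-0.5, -0.125, 0.0, 1.5]
```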

[GitHub] [incubator-mxnet] TaoLv edited a comment on issue #11417: libomp.so dependency (need REAL fix)

2019-11-07 Thread GitBox
TaoLv edited a comment on issue #11417: libomp.so dependency (need REAL fix) URL: https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-551361750 > btw there's a problem with the mkldnn build. It pulls in libgomp always: @cjolivier01 It seems that even MXNet is not buil

[GitHub] [incubator-mxnet] TaoLv edited a comment on issue #11417: libomp.so dependency (need REAL fix)

2019-11-07 Thread GitBox
TaoLv edited a comment on issue #11417: libomp.so dependency (need REAL fix) URL: https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-551361750 > btw there's a problem with the mkldnn build. It pulls in libgomp always: @cjolivier01 It seems that even MXNet is not buil

[GitHub] [incubator-mxnet] leezu closed issue #16758: [Estimator] Emits warnings when mixing user-defined and default event handlers

2019-11-07 Thread GitBox
leezu closed issue #16758: [Estimator] Emits warnings when mixing user-defined and default event handlers URL: https://github.com/apache/incubator-mxnet/issues/16758 This is an automated message from the Apache Git Service.
