wuxun-zhang commented on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550191232
You can try to use `export MKLDNN_VERBOSE=1` to get these logs.
Also I just filed a [PR
](https://git
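As a minimal sketch of using that flag from Python (assuming, as is typical for such flags, that it must be set before the library loads MKL-DNN):

```python
import os

# MKLDNN_VERBOSE is read when the MKL-DNN library initializes, so set it
# before importing mxnet (or export it in the shell as suggested above).
os.environ["MKLDNN_VERBOSE"] = "1"

# import mxnet as mx   # primitive-level logs would now print to stdout
print(os.environ["MKLDNN_VERBOSE"])
```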
access2rohit commented on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550190331
> @access2rohit is this a necessary part for r1.6 or can we fix it in master?
@pengzhao-intel
Yes, it is
access2rohit edited a comment on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550190331
> @access2rohit is this a necessary part for r1.6 or can we fix it in master?
@pengzhao-intel
access2rohit edited a comment on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550189449
@pengzhao-intel I tried with branch v1.6.x. @rongzha1, can you try with that
branch too?
Also, I n
This is an automated email from the ASF dual-hosted git repository.
apeforest pushed a change to branch benchmark
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 51c2065 Pointwise fusion for GPU (#15167)
add 0415a2f Eliminate common expressions (#15657)
No
access2rohit commented on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550189449
@pengzhao-intel I tried with branch v1.6.x. @rongzha1, can you try with that
branch too?
Also, I never go
This is an automated email from the ASF dual-hosted git repository.
apeforest pushed a change to branch benchmark
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 0cbee04 Fix the index_t with int comparison
add 51c2065 Pointwise fusion for GPU (#15167)
No
wuxun-zhang opened a new pull request #16737: [MKLDNN] use dim_t instead of int
in slice/transpose operators
URL: https://github.com/apache/incubator-mxnet/pull/16737
## Description ##
When MXNet is built with large tensor support, `int` or `unsigned int`
cannot properly handle some lar
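A hedged illustration of the failure mode this description points at (the exact operators affected are in the PR itself, not reproduced here): an element count past 2**31 - 1 silently wraps when stored in a 32-bit `int`, which is why a wider `dim_t` is needed.

```python
import ctypes

# Smallest element count that no longer fits in a signed 32-bit int.
idx = 2**31

# Simulate storing the index in a C `int`: ctypes does no overflow
# checking, so the value silently wraps to a large negative number.
wrapped = ctypes.c_int32(idx).value
print(idx, wrapped)   # 2147483648 -2147483648
```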
feevos commented on issue #16736: Bug when iterating over HybridSequential
elements
URL:
https://github.com/apache/incubator-mxnet/issues/16736#issuecomment-550182869
Workaround that solves the problem (at some computational cost, I guess...):
```Python
class Demo(HybridBlock):
feevos commented on issue #16736: Bug when iterating over HybridSequential
elements
URL:
https://github.com/apache/incubator-mxnet/issues/16736#issuecomment-550172645
Some additional information: it seems the error relates to how many times
the initial input is passed from the conv layer
ptrendx edited a comment on issue #15589: [Discussion] 1.6.0 Roadmap
URL:
https://github.com/apache/incubator-mxnet/issues/15589#issuecomment-526373840
We have multiple improvements to BERT inference and training speed that we
would like to be part of 1.6 release:
- [x] Softmax optimiz
feevos opened a new issue #16736: Bug when iterating over HybridSequential
elements
URL: https://github.com/apache/incubator-mxnet/issues/16736
## Description
Dear all, there is a bug when iterating over a HybridSequential treated as a
container. This bug depends on the length of the c
apeforest opened a new pull request #16735: Use single-bit for mask in dropout
operator
URL: https://github.com/apache/incubator-mxnet/pull/16735
## Description ##
Use single bit in mask for dropout to reduce memory.
This PR fixes https://github.com/apache/incubator-mxnet/issues/15968
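A hypothetical NumPy sketch of the idea (not the PR's actual kernel): packing the keep/drop mask into single bits shrinks it 8x relative to a byte-per-element boolean mask, while the exact mask can still be recovered for the backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
p_keep = 0.5
keep = rng.random(64) < p_keep     # boolean mask: 1 byte per element (64 B)
packed = np.packbits(keep)         # 1 bit per element: 8 bytes total

# Unpacking recovers the exact mask, so gradients can be masked identically.
restored = np.unpackbits(packed).astype(bool)
assert keep.nbytes == 64 and packed.nbytes == 8
assert (restored == keep).all()
```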
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new cb557d8 Bump the publis
xidulu edited a comment on issue #16638: [Numpy] Add sampling method for
bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#issuecomment-549897379
Correctness of the distribution has been briefly verified by hand.
With the following code:
```
In [9]: (npx.random.berno
wuxun-zhang commented on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550154021
Also tested with AWS EC2 m5.8 instance, and found no error (master commit
3c404a512829d2894ffe3612dc3cb29a12
apeforest commented on issue #16184: Add large tensor nightly tests for MKL-DNN
operators
URL: https://github.com/apache/incubator-mxnet/pull/16184#issuecomment-550151914
Since the nightly test is currently broken, could you please run all the
tests offline and paste your output to this PR
apeforest commented on a change in pull request #16184: Add large tensor
nightly tests for MKL-DNN operators
URL: https://github.com/apache/incubator-mxnet/pull/16184#discussion_r342923002
##
File path: tests/nightly/test_large_array.py
##
@@ -944,11 +997,14 @@ def test_re
apeforest commented on a change in pull request #16184: Add large tensor
nightly tests for MKL-DNN operators
URL: https://github.com/apache/incubator-mxnet/pull/16184#discussion_r342922763
##
File path: tests/nightly/test_large_array.py
##
@@ -782,8 +801,30 @@ def test_act
apeforest commented on a change in pull request #16184: Add large tensor
nightly tests for MKL-DNN operators
URL: https://github.com/apache/incubator-mxnet/pull/16184#discussion_r342922471
##
File path: tests/nightly/test_large_array.py
##
@@ -619,9 +631,16 @@ def testSoft
wuxun-zhang commented on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550138759
@TaoLv Yes, we have enabled the int64 flag; the build command is `make
USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl
haojin2 edited a comment on issue #16718: Cleaner API for utilizing all GPUs if
available
URL:
https://github.com/apache/incubator-mxnet/issues/16718#issuecomment-550133675
@ChaiBapchya @nickguletskii There's one
[function](https://github.com/d2l-ai/d2l-en/blob/d9eb3df062a3abf6bb0af21a022
haojin2 commented on issue #16718: Cleaner API for utilizing all GPUs if
available
URL:
https://github.com/apache/incubator-mxnet/issues/16718#issuecomment-550133675
@ChaiBapchya @nickguletskii There's one
[function](https://github.com/d2l-ai/d2l-en/blob/d9eb3df062a3abf6bb0af21a022a03edd7
haojin2 commented on issue #16726: fatal error: cub/cub.cuh: No such file or
directory
URL:
https://github.com/apache/incubator-mxnet/issues/16726#issuecomment-550133290
@394781865 how are you doing the build?
Did you check out all the submodules?
Can you try `git submodule update --
szha commented on a change in pull request #16716: [Numpy][WIP] Fix
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342903386
##
File path: python/mxnet/gluon/parameter.py
##
@@ -904,7 +904,11 @@ d
szha commented on a change in pull request #16716: [Numpy][WIP] Fix
collect_params().zero_grad() in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716#discussion_r342902762
##
File path: python/mxnet/gluon/parameter.py
##
@@ -904,7 +904,11 @@ d
TaoLv commented on issue #16732: MKLDNN-1.0 doesn't support slice operator for
Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550121522
@rongzha1 Have you ever tried to build MXNet with `USE_INT64_TENSOR_SIZE=1`?
rongzha1 commented on issue #16732: MKLDNN-1.0 doesn't support slice operator
for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550120630
@access2rohit could you add some backtrace (bt) info? Thanks.
rongzha1 commented on issue #16732: MKLDNN-1.0 doesn't support slice operator
for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550118805
Can't reproduce this case on our Skylake machine. Will keep debugging.
(mxnet) [rongzha1@mlt-skx141 rong_git_mxne
wuxun-zhang commented on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550102920
@pengzhao-intel I am looking into this.
pengzhao-intel commented on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550102377
@access2rohit is this a necessary part for r1.6 or can we fix it in master?
sxjscience commented on issue #16708: Training an FPN model using
grad_req="add" causes rapid divergence, while manually implemented gradient
accumulation works fine
URL:
https://github.com/apache/incubator-mxnet/issues/16708#issuecomment-550101790
I've confirmed that this issue does exist
pengzhao-intel commented on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-550101108
@rongzha1 @wuxun-zhang could you take a look ASAP?
ZhennanQin opened a new pull request #16734: [MKLDNN] Fix int8 convolution bias
overflow
URL: https://github.com/apache/incubator-mxnet/pull/16734
## Description ##
When weights are too small (~1e-6), the bias may overflow in int32. This PR
handles this case and can provide correct re
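A back-of-the-envelope illustration (hypothetical scales, not taken from the PR): in int8 quantization the int32 bias is roughly `bias_fp32 / (scale_weight * scale_data)`, so a tiny weight scale pushes it past INT32_MAX.

```python
# Hypothetical int8 quantization scales, for illustration only.
scale_w = 1e-6 / 127        # weight scale when weights are around 1e-6
scale_x = 1.0 / 127         # data scale for inputs in [0, 1]
bias_fp32 = 0.5

# The quantized bias value that would be stored in int32.
bias_int32 = round(bias_fp32 / (scale_w * scale_x))
print(bias_int32 > 2**31 - 1)   # True: the quantized bias overflows int32
```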
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 13a0861 Bump the publis
ThomasDelteil opened a new pull request #16733: fix R docs
URL: https://github.com/apache/incubator-mxnet/pull/16733
Add the link to the latest R docs rather than the old one
This is the old pdf:
https://s3.amazonaws.com/mxnet-prod/docs/R/mxnet-r-reference-manual.pdf
This is the new p
sxjscience commented on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-550076892
Would it be more appropriate to turn off fused_op by default? We can still
provide documentation for users to turn it on manually.
djaym7 commented on issue #14875: MXNet to ONNX export bug
URL:
https://github.com/apache/incubator-mxnet/issues/14875#issuecomment-550039126
Does anyone have any update on this? I am having the same issue...
stereomatchingkiss commented on issue #16724: Example link of the image
classification show 404
URL:
https://github.com/apache/incubator-mxnet/issues/16724#issuecomment-550036051
The correct link should be:
https://github.com/dmlc/mxnet-notebooks/blob/master/python/tutorials/predict_imagenet.i
rondogency commented on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-550032998
@ptrendx @Caenorst @nvchai
This is an automated message from the Apache Git Servi
rondogency edited a comment on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-550008831
Hi @DickJC123 looks like this PR increases the CI centos-gpu testing time by
over 100%, which is caused by some gpu tests running time incre
rondogency commented on issue #15167: Pointwise fusion for GPU
URL: https://github.com/apache/incubator-mxnet/pull/15167#issuecomment-550008831
Hi @DickJC123 looks like this PR increases the CI centos-gpu testing time by
over 100%, which is caused by some gpu tests running time increasing b
access2rohit commented on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-549984369
@mxnet-label-bot add [MKLDNN]
access2rohit commented on issue #16732: MKLDNN-1.0 doesn't support slice
operator for Large Tensor
URL:
https://github.com/apache/incubator-mxnet/issues/16732#issuecomment-549977756
@pengzhao-intel @TaoLv
access2rohit opened a new issue #16732: MKLDNN-1.0 doesn't support slice
operator
URL: https://github.com/apache/incubator-mxnet/issues/16732
## Description
When MXNet is built for CPU with MKL-DNN, the slice operator doesn't work.
### Error Message
`could not initialize a sub-memory`
haojin2 commented on a change in pull request #16728: [DO NOT MERGE YET]
Support boolean elemwise/broadcast binary add, multiply and true_divide
URL: https://github.com/apache/incubator-mxnet/pull/16728#discussion_r342749974
##
File path: src/operator/operator_tune-inl.h
##
haojin2 commented on issue #16699: Mixed data type binary ops
URL: https://github.com/apache/incubator-mxnet/pull/16699#issuecomment-549977200
@cjolivier01 Yeah that's exactly part of what I did in #16711, also in this
PR you could see that there're some places I gave up using the type swit
codecov-io commented on issue #16638: [WIP] [Numpy] Add sampling method for
bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#issuecomment-549974047
# [Codecov](https://codecov.io/gh/apache/incubator-mxnet/pull/16638?src=pr&el=h1) Report
> Merging
[#16638](https://co
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new edb186e Bump the publis
eric-haibin-lin commented on issue #16022: [MXNET-1421] Added (CuDNN)BatchNorm
operator to the list of mirrored operators
URL: https://github.com/apache/incubator-mxnet/pull/16022#issuecomment-549955271
There were some updates to the MXNet website build. Could you sync with mxnet
master? Thanks.
apeforest commented on issue #16395: [WIP] Use single bit for mask in Dropout
URL: https://github.com/apache/incubator-mxnet/pull/16395#issuecomment-549946505
Closed this PR to avoid unnecessary CI run upon update. I will open another
PR once it's ready for testing.
This is an automated email from the ASF dual-hosted git repository.
reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 0c5677e Faster GPU NMS operator (#16542)
add 3c404a5 Mixed data type binary ops (#16699)
No new r
reminisce merged pull request #16699: Mixed data type binary ops
URL: https://github.com/apache/incubator-mxnet/pull/16699
xidulu commented on issue #16638: [WIP] [Numpy] Add sampling method for
bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#issuecomment-549897379
Unit tests are to be added;
correctness has been briefly verified by hand.
```
In [9]: (npx.random.bernoulli(prob=prob, size=(100
ptrendx commented on issue #16725: Failed test:
test_gluon_gpu.test_rnn_unroll_variant_length
URL:
https://github.com/apache/incubator-mxnet/issues/16725#issuecomment-549893887
Thanks for the report, we will look into this issue.
xidulu commented on a change in pull request #16638: [WIP] [Numpy] Add sampling
method for bernoulli
URL: https://github.com/apache/incubator-mxnet/pull/16638#discussion_r342632443
##
File path: python/mxnet/symbol/numpy_extension/random.py
##
@@ -0,0 +1,57 @@
+# Licensed
TaoLv opened a new pull request #16731: [WIP] Static link MKL-DNN library
URL: https://github.com/apache/incubator-mxnet/pull/16731
## Description ##
(Brief description on what this PR is about)
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items
cjolivier01 commented on a change in pull request #16728: [DO NOT MERGE YET]
Support boolean elemwise/broadcast binary add, multiply and true_divide
URL: https://github.com/apache/incubator-mxnet/pull/16728#discussion_r342619026
##
File path: src/operator/operator_tune-inl.h
##
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 410578d Bump the publis
pengzhao-intel commented on issue #16691: [WIP] Quantized Embedding
URL: https://github.com/apache/incubator-mxnet/pull/16691#issuecomment-549810548
@xinyu-intel is this still WIP?
vexilligera opened a new pull request #16730: [NumPy] NumPy support for
linalg.inv
URL: https://github.com/apache/incubator-mxnet/pull/16730
## Description ##
(Brief description on what this PR is about)
tentative PR
## Checklist ##
### Essentials ###
Please feel free to
pedro-abundio-wang commented on issue #16527: ErrStr:no kernel image is
available for execution on the device
URL:
https://github.com/apache/incubator-mxnet/issues/16527#issuecomment-549794335
![image](https://user-images.githubusercontent.com/50159788/68206065-c7327d00-0006-11ea-81a0
hgt312 opened a new pull request #16729: [NumPy][TVM] NumPy Unary Operator
Using TVM Infra
URL: https://github.com/apache/incubator-mxnet/pull/16729
WIP
ShoufaChen closed issue #16696: ReLU6 and Swish function
URL: https://github.com/apache/incubator-mxnet/issues/16696
haojin2 opened a new pull request #16728: [DO NOT MERGE YET] Support boolean
elemwise/broadcast binary add, multiply and true_divide
URL: https://github.com/apache/incubator-mxnet/pull/16728
## Description ##
Support operations between 2 boolean-typed tensors, currently `add`,
`multiply
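For context, NumPy's own semantics for these ops on booleans, which the PR presumably targets (mxnet behavior is not shown here, since the PR is marked do-not-merge):

```python
import numpy as np

a = np.array([True, False, True])
b = np.array([True, True, True])

print((a + b).tolist())   # [True, True, True]  -> add acts as logical OR
print((a * b).tolist())   # [True, False, True] -> multiply acts as logical AND
print((a / b).dtype)      # float64 -> true_divide promotes booleans to float
```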
vexilligera closed pull request #16727: [NumPy] Add NumPy support for linalg.inv
URL: https://github.com/apache/incubator-mxnet/pull/16727
vexilligera opened a new pull request #16727: [NumPy] Add NumPy support for
linalg.inv
URL: https://github.com/apache/incubator-mxnet/pull/16727
## Description ##
(Brief description on what this PR is about)
tentative PR
## Checklist ##
### Essentials ###
Please feel free
394781865 opened a new issue #16726: fatal error: cub/cub.cuh: No such file or
directory
URL: https://github.com/apache/incubator-mxnet/issues/16726
src/operator/numpy/random/./../../tensor/./././cast_storage-inl.cuh:28:23:
fatal error: cub/cub.cuh: No such file or directory compilation