wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when inputs are variant
URL:
https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-456988184
@zhreshold solved. Thank you!
wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when inputs are variant
URL:
https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-456698039
@zhreshold
I tested it on the server with Ubuntu 14.04, Tesla M40 (24G) x 4, CUDA 8.0 just now.
The training spe
wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when inputs are variant
URL:
https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-456607472
@zhreshold Thank you!
It’s flaky.
I tested it on the server with Ubuntu 14.04, Tesla M40 (24G) x 4, CUDA 9.0.
wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when inputs are variant
URL:
https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-455779305
@zhreshold @szha
Hello! I have written a minimal reproducible example which doesn't need a
dataset.
[Code](htt
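The link to the reproducible example is truncated above. As an illustration only (not the author's actual script), a minimal benchmark harness of this kind, drawing random spatial sizes in the (9, 3, 300 to 512, 300 to 512) range described later in the thread, might look like the sketch below; the stand-in workload and helper names are assumptions, and a real repro would run a network forward/backward pass instead:

```python
import random
import time

import numpy as np


def random_input(batch=9, channels=3, lo=300, hi=512):
    """Draw an NCHW input whose spatial size varies per iteration,
    mimicking the variant shapes reported in this issue."""
    h = random.randint(lo, hi)
    w = random.randint(lo, hi)
    return np.random.rand(batch, channels, h, w).astype(np.float32)


def benchmark(step, n_iters=20, fixed_shape=False):
    """Average the wall-clock time of `step` over inputs with either
    variant shapes (the slow case) or one fixed shape (the fast case)."""
    fixed = random_input()
    start = time.perf_counter()
    for _ in range(n_iters):
        x = fixed if fixed_shape else random_input()
        step(x)
    return (time.perf_counter() - start) / n_iters


# Stand-in workload; substitute a model's forward/backward pass to
# compare MXNet versions the way the issue describes.
avg = benchmark(lambda x: x.sum(), n_iters=5)
print(f"avg time per iteration: {avg:.6f}s")
```

Comparing `fixed_shape=True` against `fixed_shape=False` across the two MXNet versions would isolate the shape-dependent slowdown the thread is discussing.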
wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when inputs are variant
URL:
https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-455741203
@szha
In my experiment, the input size is (9, 3, 300 to 512, 300 to 512), where 9 is
the batch size and 3 is the number of
wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when inputs are variant
URL:
https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-455726968
@adaa I don't know. I found the speeds are the same between the two versions
when input shapes are fixed.
In my c
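Since the slowdown disappears when input shapes are fixed, one common workaround (a sketch of the general padding technique, not something proposed in this thread; the helper name and target size are assumptions) is to zero-pad every batch to one fixed spatial size so the framework always sees the same shape:

```python
import numpy as np


def pad_to_fixed(x, target_h=512, target_w=512):
    """Zero-pad an NCHW batch to a fixed spatial size so every
    iteration presents an identical shape to the framework."""
    n, c, h, w = x.shape
    out = np.zeros((n, c, target_h, target_w), dtype=x.dtype)
    out[:, :, :h, :w] = x  # original content in the top-left corner
    return out


x = np.ones((9, 3, 317, 451), dtype=np.float32)
print(pad_to_fixed(x).shape)  # (9, 3, 512, 512)
```

The trade-off is wasted computation on the padded region, but it avoids repeated shape-dependent setup work on every iteration.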
wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when inputs are variant
URL:
https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-455717882
@piyushghai Thanks.
@zhreshold In my experiment, it's a fully convolutional network model (VGG16
without FC layers