[GitHub] wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when intputs are variant
URL: https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-456988184

@zhreshold Solved. Thank you!

This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when intputs are variant
URL: https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-456698039

@zhreshold I just tested it on the server with Ubuntu 14.04, 4× Tesla M40 (24 GB), and CUDA 8.0. The training speed is 40+ samples/sec. I think the performance drop is caused by the driver rather than by MXNet: the CUDA 9.0 driver installed on the server does not match the latest MXNet.
[GitHub] wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when intputs are variant
URL: https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-456607472

@zhreshold Thank you! It's flaky. I tested it on the server with Ubuntu 14.04, 4× Tesla M40 (24 GB), and CUDA 9.0. When I remove all dilated convolutions (those whose dilation is greater than 1), there is no obvious difference between MXNet 1.3 and 1.5.
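For background (not stated in the thread): a dilated convolution covers a wider window than its nominal kernel, with effective size k + (k − 1)(d − 1), which may partly explain why dilated layers hit slower convolution code paths than plain ones. A minimal sketch:

```python
def effective_kernel(k: int, d: int) -> int:
    """Effective receptive size of a k×k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

# A 3×3 convolution with dilation 1 covers a 3×3 window;
# with dilation 2 it covers a 5×5 window.
print(effective_kernel(3, 1), effective_kernel(3, 2))  # 3 5
```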
[GitHub] wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when intputs are variant
URL: https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-455779305

@zhreshold @szha Hello! I have written a minimal reproducible example which doesn't need a dataset: [Code](https://gist.githubusercontent.com/wkcn/69f0f6d2ca467816dc481a00c225104f/raw/2899896f42a920ff0fde5ff93b9a16d16aec507f/test_fcn_for_mxnet.py). I tested it on a machine with 4× Tesla M40 (21 GB). Here is the result:

MXNet 1.5.0: 10 images/sec
MXNet 1.3.0: 40+ images/sec

MXNet is installed with `pip install mxnet-cu90 --pre` or `pip install mxnet-cu90==1.3.0`.
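The linked gist is the authoritative reproduction. As a rough, framework-free sketch of the kind of throughput measurement it performs (the function names and the stub forward pass are hypothetical, not taken from the gist):

```python
import random
import time

def benchmark(forward, shapes, warmup=2):
    """Measure throughput (samples/sec) of `forward` over a list of
    NCHW input shapes, skipping a few warm-up iterations."""
    for shape in shapes[:warmup]:
        forward(shape)
    start = time.perf_counter()
    n = 0
    for shape in shapes[warmup:]:
        forward(shape)
        n += shape[0]  # shape[0] is the batch size
    return n / (time.perf_counter() - start)

def fake_forward(shape):
    """Stand-in for a real forward/backward pass whose cost grows
    with the input area; replace with an actual model call."""
    batch, _, h, w = shape
    time.sleep(batch * h * w * 1e-9)

random.seed(0)
# Variable heights/widths in [300, 512], as in the reported experiment.
shapes = [(9, 3, random.randint(300, 512), random.randint(300, 512))
          for _ in range(20)]
print(f"{benchmark(fake_forward, shapes):.0f} samples/sec")
```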
[GitHub] wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when intputs are variant
URL: https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-455741203

@szha In my experiment, the input shape is (9, 3, 300–512, 300–512), where 9 is the batch size, 3 is the number of channels, and the height and width vary between 300 and 512. I will write a minimal reproducible example later.
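A minimal sketch of generating inputs with the shape described above (the helper name and RNG seed are illustrative, not from the experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_batch(batch=9, channels=3, lo=300, hi=512):
    """One NCHW batch shaped (9, 3, H, W) with H and W drawn
    uniformly from [300, 512]."""
    h = int(rng.integers(lo, hi + 1))
    w = int(rng.integers(lo, hi + 1))
    return rng.standard_normal((batch, channels, h, w), dtype=np.float32)

x = random_batch()
print(x.shape)  # (9, 3, H, W) with 300 <= H, W <= 512
```

Because H and W change every batch, a cached execution plan for one shape cannot simply be reused for the next, which is the situation the thread is benchmarking.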
[GitHub] wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when intputs are variant
URL: https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-455726968

@adaa I don't know. I found that the speeds of the two versions are the same when the input shapes are fixed. In my code, I call `hybridize()` first, then call `hybridize(static_alloc=True)`.
[GitHub] wkcn commented on issue #13928: MXNet 1.5.0 is slower than 1.3.0 when intputs are variant
URL: https://github.com/apache/incubator-mxnet/issues/13928#issuecomment-455717882

@piyushghai Thanks. @zhreshold In my experiment, the model is a fully convolutional network (VGG16 without the FC layers) whose inputs vary in size. I guess the performance of Faster R-CNN also drops in MXNet 1.5.0. I will check the performance of Faster R-CNN, or write a minimal reproducible example.