[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346672168 Linux (Ubuntu 16.04 LTS, GNOME 3 flavor). I meant more along the lines of whether this…

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346671750 @cjolivier01 what do you think of the script, though? Likewise :) (abroad in Germany)

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346671615 I will contact them and have them look into it.

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346671587 OK, then it appears that this might be specific to the NVIDIA Docker image.

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346671199 The personal machine's build is from master.

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346671164 The DGX uses an NVIDIA Docker image; the local machine is a pip install of the CUDA version.
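(A quick sanity check that could be run on both environments to confirm exactly which build each one has; this is only a hedged sketch using stock MXNet calls, and it assumes at least one GPU is visible to the process.)

```python
# Run on both the DGX container and the local pip install to compare builds.
import mxnet as mx

print(mx.__version__)                  # pip release string vs. a from-master build
x = mx.nd.ones((2, 2), ctx=mx.gpu(0))  # raises MXNetError if the build lacks CUDA support
print(x.context)                       # should report gpu(0)
```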

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346670791 haha :) yeah, I didn't implement it there though. That is odd; I ran mine on a DGX…

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346669382 cool :)

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346669200 Is file 3 also in the same directory?

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346660854 @cjolivier01 No worries. I appreciate your assistance. However, this behavior could signal…

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-23 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346624513 @cjolivier01 any ideas?

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-22 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346463918 @cjolivier01 please see files 2 and 3. File 3 is the net definition; file 2 imports the net.
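(For readers without the attachments, a minimal sketch of the layout being described; the file names and the toy network are hypothetical stand-ins, not the actual files 2 and 3. One module holds nothing but the net definition, and the training script imports it and spreads each batch over all eight GPUs.)

```python
# net_def.py -- hypothetical stand-in for "file 3": only the network definition
import mxnet as mx
from mxnet import gluon, autograd

def build_net():
    net = gluon.nn.HybridSequential()
    with net.name_scope():
        net.add(gluon.nn.Dense(128, activation='relu'))
        net.add(gluon.nn.Dense(10))
    return net

# train.py -- hypothetical stand-in for "file 2": imports the net and trains on all GPUs
# from net_def import build_net

ctx = [mx.gpu(i) for i in range(8)]            # one context per GPU
net = build_net()
net.initialize(mx.init.Xavier(), ctx=ctx)      # parameters replicated on every device
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1},
                        kvstore='device')      # aggregate gradients on GPU
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

def train_batch(data, label):
    # Split the batch across GPUs; each device sees batch_size / len(ctx) samples.
    xs = gluon.utils.split_and_load(data, ctx)
    ys = gluon.utils.split_and_load(label, ctx)
    with autograd.record():
        losses = [loss_fn(net(x), y) for x, y in zip(xs, ys)]
    for l in losses:
        l.backward()
    trainer.step(data.shape[0])
```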

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-22 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346462374 @cjolivier01 I understand this principle and that makes sense. However, you are overlooking…

[GitHub] SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!)

2017-11-21 Thread GitBox
SumNeuron commented on issue #8751: Distributed Training has inverse results when imported (8 GPUS is slower than 1!) URL: https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346143311 @piiswrong I agree that MNIST is small, but **1.)** that does not explain that in function…
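(To make that first point concrete, a hedged sketch of the kind of comparison being argued for: time an identical training loop twice, once with the net constructed inline and once with the same construction imported from the sketched net_def module above, so the only variable is the import path rather than the size of MNIST. The module name, layer sizes, and batch counts are all hypothetical.)

```python
# Hypothetical timing harness: same layers, same synthetic data, only the import differs.
import time
import mxnet as mx
from mxnet import gluon, autograd
from net_def import build_net            # imported variant (the "file 3" stand-in above)

def build_net_inline():                  # inline variant, identical layers
    net = gluon.nn.HybridSequential()
    with net.name_scope():
        net.add(gluon.nn.Dense(128, activation='relu'))
        net.add(gluon.nn.Dense(10))
    return net

def time_training(build_fn, ctx, num_batches=200, batch_size=512):
    net = build_fn()
    net.initialize(mx.init.Xavier(), ctx=ctx)
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
    data = mx.nd.random.uniform(shape=(batch_size, 784))   # synthetic MNIST-sized batch
    label = mx.nd.zeros((batch_size,))
    start = time.time()
    for _ in range(num_batches):
        xs = gluon.utils.split_and_load(data, ctx)
        ys = gluon.utils.split_and_load(label, ctx)
        with autograd.record():
            losses = [loss_fn(net(x), y) for x, y in zip(xs, ys)]
        for l in losses:
            l.backward()
        trainer.step(batch_size)
    mx.nd.waitall()                      # flush MXNet's async engine before stopping the clock
    return time.time() - start

ctx = [mx.gpu(i) for i in range(8)]
print('inline  :', time_training(build_net_inline, ctx))
print('imported:', time_training(build_net, ctx))
```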