SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346672168
Linux (Ubuntu 16.04 LTS - Gnome 3 flavor)
I meant more along the lines as to if thi…
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346671750
@cjolivier01 what do you think of the script, though?
Likewise :) (abroad in Germany)
-
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346671615
I will contact them and have them look into it
--
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346671587
Ok, then it appears that this might be specific to the NVIDIA Docker image.
-
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346671199
My personal machine is built from master.
---
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346671164
The DGX uses an NVIDIA Docker image.
The local machine is a pip install of the CUDA version.
--
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346670791
haha :) yeah, I didn't implement it there though.
That is odd, I ran mine on a DGX..
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346669382
cool :)
This is an autom…
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346669200
is file 3 also in the same directory?
---
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346660854
@cjolivier01 No worries. I appreciate your assistance. However, this
behavior could signa…
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346624513
@cjolivier01 any ideas?
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346463918
@cjolivier01 please see files 2 and 3. File 3 is the net definition; file 2
imports the n…
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346462374
@cjolivier01 I understand this principle and that makes sense. However, you
are overlooki…
SumNeuron commented on issue #8751: Distributed Training has inverse results
when imported (8 GPUS is slower than 1!)
URL:
https://github.com/apache/incubator-mxnet/issues/8751#issuecomment-346143311
@piiswrong I agree that MNIST is small, but **1.)** that does not explain
that in functio…